
Dissertations / Theses on the topic 'Network Error Correcting Codes'

Consult the top 50 dissertations / theses for your research on the topic 'Network Error Correcting Codes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shen, Bingxin. "Application of Error Correction Codes in Wireless Sensor Networks." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/ShenB2007.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tezeren, Serdar U. "Reed-Muller codes in error correction in wireless adhoc networks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FTezeren.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Murali Tummala, Roberto Cristi. Includes bibliographical references (p. 133-134). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Yen-Chi. "Error resilient video streaming over lossy networks." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04082004-180302/unrestricted/lee%5fyen-chi%5f200312%5fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wong, Kin-Fung. "Lateral error recovery for application-level multicast /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20WONGK.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 48-52). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Xiao-an. "Trellis based decoders and neural network implementations." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hua, Nan. "Space-efficient data sketching algorithms for network applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.

Full text
Abstract:
Sketching techniques are widely adopted in network applications. Sketching algorithms “encode” data into succinct data structures that can later be accessed and “decoded” for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives of this category. Such sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission, and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays, and error estimating codes.
A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. Bloom filters and their variants have found widespread use in many networking applications where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation brings a significant reduction in space and in the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported.
An exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce storage costs while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that supports active measurements (real-time reads and writes). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage costs of the counter array.
Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC code can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which can achieve around a 70% reduction in sketch size with similar estimation accuracy. For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
APA, Harvard, Vancouver, ISO, and other styles
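The abstract above treats Bloom filters as a canonical sketching structure. As context, here is a minimal sketch of the classic Bloom filter membership query; the double-hashing scheme, sizes, and class name are illustrative assumptions, not the thesis's rank-indexed variants.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k indexes into an m-bit array per element.
    Membership queries may yield false positives, never false negatives."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        # Derive k indexes from one SHA-256 digest via double hashing.
        h = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = True

    def __contains__(self, item):
        return all(self.bits[i] for i in self._indexes(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print("10.0.0.1" in bf)   # True: added elements are always found
print("10.0.0.2" in bf)   # almost certainly False; false positives occur
                          # with probability roughly (k*n/m)**k
```

The space saving comes from never storing the elements themselves, which is why deletions are hard for the plain filter and motivate the augmented variants the abstract proposes.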
7

Emani, Krishna Chaitanya Suryavenkata. "Application of hybrid ARQ to controller area networks." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/Krishna_C_Emani_09007dcc804e7970.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed April 21, 2008). Includes bibliographical references (p. 48).
APA, Harvard, Vancouver, ISO, and other styles
8

Pishro-Nik, Hossein. "Applications of Random Graphs to Design and Analysis of LDPC Codes and Sensor Networks." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7722.

Full text
Abstract:
This thesis investigates a graph- and information-theoretic approach to the design and analysis of low-density parity-check (LDPC) codes and wireless networks. In this work, both LDPC codes and wireless networks are considered as random graphs. This work proposes solutions to important theoretical and practical open problems in LDPC coding, and for the first time introduces a framework for the analysis of finite wireless networks.
LDPC codes are considered to be one of the best classes of error-correcting codes, and several problems in this area are studied here. First, an improved decoding algorithm for LDPC codes is introduced. Compared to standard iterative decoding, the proposed decoding algorithm can result in several orders of magnitude lower bit error rates, while having almost the same complexity. Second, this work presents a variety of bounds on the achievable performance of different LDPC coding scenarios. Third, it studies rate-compatible LDPC codes, provides fundamental properties of these codes, and gives guidelines for the optimal design of rate-compatible codes. Finally, it studies non-uniform and unequal error protection using LDPC codes and explores their applications to data storage systems and communication networks. It presents a new error-control scheme for volume holographic memory (VHM) systems and shows that the new method can increase the storage capacity by more than fifty percent compared to previous schemes.
This work also investigates the application of random graphs to the design and analysis of wireless ad hoc and sensor networks. It introduces a framework, previously lacking in the literature, for the analysis of finite wireless networks. Using this framework, different network properties such as capacity, connectivity, coverage, and routing and security algorithms are studied. Finally, connectivity properties of large-scale sensor networks are investigated, and it is shown how the unreliability of sensors, link failures, and non-uniform distribution of nodes affect the connectivity of sensor networks.
APA, Harvard, Vancouver, ISO, and other styles
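The abstract above concerns improved iterative decoders for LDPC codes. As background, here is a hedged sketch of the classic bit-flipping decoder (not the thesis's algorithm) on a toy parity-check matrix; the matrix `H` and codeword below are illustrative assumptions only.

```python
# Toy parity-check matrix: each column (bit) participates in two checks.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
]

def bit_flip_decode(word, h=H, max_iters=10):
    """Gallager-style bit flipping: repeatedly flip the bit that
    participates in the most unsatisfied parity checks."""
    w = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h[r][c] * w[c] for c in range(len(w))) % 2
                    for r in range(len(h))]
        if not any(syndrome):
            return w  # all parity checks satisfied
        votes = [sum(h[r][c] for r in range(len(h)) if syndrome[r])
                 for c in range(len(w))]
        w[votes.index(max(votes))] ^= 1  # flip the worst offender
    return w

codeword = [1, 1, 0, 0, 1, 1]       # satisfies all four parity checks
received = [1, 1, 1, 0, 1, 1]       # bit 2 flipped by the channel
print(bit_flip_decode(received))    # → [1, 1, 0, 0, 1, 1]
```

On this toy matrix any single-bit error is corrected, since the flipped bit's two checks both fail while every other bit shares at most one of them; real LDPC decoders replace this hard-decision vote with message passing on much larger sparse graphs.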
9

Zhang, Liren. "Recovery of cell loss in ATM networks using forward error correction coding techniques /." Title page, contents and summary only, 1992. http://web4.library.adelaide.edu.au/theses/09PH/09phz6332.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1993.
Copies of author's previously published articles inserted. Includes bibliographical references (leaves 179-186).
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Jian Electrical Engineering Australian Defence Force Academy UNSW. "Error resilience for video coding services over packet-based networks." Awarded by: University of New South Wales - Australian Defence Force Academy. School of Electrical Engineering, 1999. http://handle.unsw.edu.au/1959.4/38652.

Full text
Abstract:
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation.
To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG-2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer.
Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG-2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG-2 standard.
The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE). The decoded video quality can be improved significantly with this technique. Basically, it exploits the temporal redundancy between the current and previous frames, and the correlation between lost macroblocks and their surrounding pixels. Motion estimation can therefore be applied again to search the previous picture for a match to the lost macroblocks. The process is similar to that performed by the encoder, but it takes place in the decoder. The integration of techniques such as DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated, providing an overview of the performance produced by individual techniques compared to the combined techniques. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG-2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
APA, Harvard, Vancouver, ISO, and other styles
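The DMVE idea above lends itself to a short sketch: for a lost block, the decoder searches the previous frame for the candidate whose one-pixel border best matches the intact pixels surrounding the hole, then copies that candidate. The frame layout, block size, and search range below are hypothetical simplifications, not the thesis's implementation.

```python
def conceal_block(prev, cur, x, y, bs=4, search=2):
    """Replace the lost bs x bs block at (x, y) in frame `cur` using the
    previous frame `prev`. The match criterion is the sum of absolute
    differences (SAD) over the one-pixel border around the block -- the
    only pixels the decoder still has after the loss."""
    def border(frame, bx, by):
        px = []
        for j in range(by - 1, by + bs + 1):
            for i in range(bx - 1, bx + bs + 1):
                if j in (by - 1, by + bs) or i in (bx - 1, bx + bs):
                    px.append(frame[j][i])
        return px

    target = border(cur, x, y)                 # intact ring around the hole
    best, best_sad = (x, y), float("inf")
    for dy in range(-search, search + 1):      # candidate motion vectors
        for dx in range(-search, search + 1):
            cand = border(prev, x + dx, y + dy)
            sad = sum(abs(a - b) for a, b in zip(cand, target))
            if sad < best_sad:
                best, best_sad = (x + dx, y + dy), sad
    bx, by = best
    for j in range(bs):                        # copy the matched block in
        for i in range(bs):
            cur[y + j][x + i] = prev[by + j][bx + i]
    return best
```

With a synthetic frame pair in which the content shifts one pixel to the right, the border search recovers that motion and fills the hole from the correct location in the previous frame.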
11

Al-Regib, Ghassan. "Delay-constrained 3-D graphics streaming over lossy networks." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Sartipi, Mina. "Modern Error Control Codes and Applications to Distributed Source Coding." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19795.

Full text
Abstract:
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N1 X N2 satisfy the Reiger bound with equality for burst erasures of dimensions N1 X N2/2 and N1/2 X N2, where GCD(N1,N2) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N1N2/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding.
This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which reduces distributed source coding to designing non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes reduces to a non-uniform LDPC code for a system of two correlated sources, and to semi-random punctured LDPC codes for three. We also investigate distributed source coding at any arbitrary rate on the Slepian-Wolf rate region. This problem reduces to designing a rate-compatible LDPC code that has the unequal error protection property.
This dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes. Non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose.
We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while it has lower complexity and higher adaptability.
APA, Harvard, Vancouver, ISO, and other styles
13

Vellambi, Badri Narayanan. "Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/37091.

Full text
Abstract:
The conception of turbo codes by Berrou et al. created renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes will play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes, and the thesis can be broadly categorized into three parts.
The first part focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed.
The second part aims at establishing these codes as a good tool for developing reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols; the ratelessness and the randomness in the encoding process make this scheme specifically suited to such network applications.
The final part investigates an application of a specific class of codes, called network codes, to finite-buffer wired networks, and aims at establishing a framework for the theoretical study and understanding of such networks. The proposed method extends existing results to develop an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on various links of the network.
APA, Harvard, Vancouver, ISO, and other styles
14

Rahnavard, Nazanin. "Coding for wireless ad-hoc and sensor networks unequal error protection and efficient data broadcasting /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/26673.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Professor Faramarz Fekri; Committee Member: Professor Christopher Heil; Committee Member: Professor Ian F. Akyildiz; Committee Member: Professor James H. McClellan; Committee Member: Professor Steven W. McLaughlin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
15

Lin, Burch. "Neural networks and their application to metrics research." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014859.

Full text
Abstract:
In the development of software, time and resources are limited. As a result, developers collect metrics in order to more effectively allocate resources to meet time constraints. For example, if one could collect metrics to determine, with accuracy, which modules were error-prone and which were error-free, one could allocate personnel to work only on those error-prone modules.
There are three items of concern when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Secondly, the amount of metrics data collected can be staggering. Thirdly, interpretation of multiple metrics may provide a better indication of error-proneness than any single metric.
This thesis researched the accuracy of a neural network, an unconventional model, in building a model that can determine whether a module is error-prone from an input of a suite of metrics. The accuracy of the neural network model was compared with the accuracy of a logistic regression model, a standard statistical model, that has the same input and output. In other words, we attempted to find whether metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics that was used to build the models was a subset of a larger collection of metrics that was reduced using factor analysis.
The conclusion of this thesis is that, from the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use. We cannot conclude whether one model provides better accuracy than the other.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
16

Demircin, Mehmet Umut. "Robust video streaming over time-varying wireless networks." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24790.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Yucel Altunbasak; Committee Member: Chuanyi Ji; Committee Member: Ghassan AlRegib; Committee Member: Ozlem Ergun; Committee Member: Russell M. Mersereau.
APA, Harvard, Vancouver, ISO, and other styles
17

Octavian, Stan. "New recursive algorithms for training feedforward multilayer perceptrons." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Boraten, Travis H. "Runtime Adaptive Scrubbing in Fault-Tolerant Network-on-Chips (NoC) Architectures." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1397488496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Kleinschmidt, João Henrique. "Propostas e analise de estrategias de controle de erros para redes de sensores sem fio." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261209.

Full text
Abstract:
Advisor: Walter da Cunha Borelli
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: Wireless ad hoc networks do not require a fixed infrastructure and use radio waves for data transmission. A wireless sensor network is a kind of ad hoc network formed by low-cost, low-power sensor devices. These characteristics make ad hoc and sensor networks highly energy-constrained. Moreover, information transmitted over the wireless channel suffers high error rates. In order to improve the reliability of the data sent over the channel, techniques such as retransmission and error-correcting codes can be applied. This thesis analyzes and proposes different error control strategies for wireless sensor networks. Analytical and simulation models of error control techniques for energy consumption and energy efficiency are presented. These models are adapted to the IEEE 802.15.1 (Bluetooth) and IEEE 802.15.4 (ZigBee) standards, and novel custom and adaptive error control schemes are proposed for these standards. Adaptive error control strategies using the informational value of messages, based on coverage area and entropy, are also proposed. The results are obtained for different network scenarios, channel conditions and numbers of hops. The choice of the best error control scheme depends on the channel quality and on the application considered.
Doctorate
Telecommunications and Telematics
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
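The abstract's conclusion, that the best error control scheme depends on channel quality, can be illustrated with a back-of-the-envelope energy model. The functions and numbers below are hypothetical simplifications (ideal stop-and-wait ARQ, an FEC code assumed to correct all residual errors), not the thesis's simulation models.

```python
def arq_energy_per_packet(per, e_tx=1.0):
    """Expected transmit energy with ideal stop-and-wait ARQ: a packet is
    resent until received, so E[transmissions] = 1 / (1 - per)."""
    return e_tx / (1.0 - per)

def fec_energy_per_packet(code_rate, e_tx=1.0):
    """FEC sends 1/code_rate as many bits per packet; assume, optimistically,
    that the code corrects all residual errors in this regime."""
    return e_tx / code_rate

# A rate-2/3 code costs a fixed 1.5x per packet; ARQ's cost grows with
# the packet error rate, so the preferred scheme flips on bad channels.
for per in (0.05, 0.2, 0.5):
    arq = arq_energy_per_packet(per)
    fec = fec_energy_per_packet(2 / 3)
    print(per, "ARQ" if arq < fec else "FEC")
# → 0.05 ARQ / 0.2 ARQ / 0.5 FEC
```

The crossover sits where 1/(1 - per) equals 1/rate, i.e. per = 1 - rate; adaptive schemes like those proposed in the thesis effectively track which side of this crossover the channel is on.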
20

Gallo, Martin. "Posouzení vlivu dělícího poměru na pasivní optickou síť." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-242101.

Full text
Abstract:
This thesis deals with the most recent passive optical network standard, NG-PON2, and describes a sublayer model that includes error-correction coding during propagation in optical fibres. It assesses the impact of split ratios using a simulation environment created from the defined model, compares various scenarios, and discusses possible sources of error in the simulation model compared with a real deployment.
APA, Harvard, Vancouver, ISO, and other styles
21

Pappala, Swetha. "Device Specific Key Generation Technique for Anti-Counterfeiting Methods Using FPGA Based Physically Unclonable Functions and Artificial Intelligence." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1336613043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kosek, Peter M. "Error Correcting Codes." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417508067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Skoglund, Isabell. "Reed-Solomon Codes - Error Correcting Codes." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97343.

Full text
Abstract:
In the following pages an introduction of the error correcting codes known as Reed-Solomon codes will be presented together with different approaches for decoding. This is supplemented by a Mathematica program and a description of this program that gives an understanding in how the choice of decoding algorithms affect the time it takes to find errors in stored or transmitted information.
APA, Harvard, Vancouver, ISO, and other styles
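The entry above introduces Reed-Solomon codes and their decoding. Here is a minimal, hedged sketch of the erasure-correction view of Reed-Solomon coding over a small prime field (real implementations use GF(2^8); P = 257 keeps the arithmetic readable): k message symbols define a degree-(k-1) polynomial, and any k of its n evaluations recover the message by Lagrange interpolation. All names and parameters are illustrative.

```python
P = 257  # prime modulus standing in for the GF(2^8) used in practice

def rs_encode(msg, n):
    """Evaluate the message polynomial (coefficients `msg`, low order
    first) at the points x = 0 .. n-1."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(n)]

def _mul_linear(poly, c):
    """Multiply a coefficient list (low order first) by (x - c) mod P."""
    out = [0] * (len(poly) + 1)
    for i, a in enumerate(poly):
        out[i] = (out[i] - c * a) % P
        out[i + 1] = (out[i + 1] + a) % P
    return out

def rs_recover(points, k):
    """Rebuild the k message coefficients from any k surviving (x, y)
    pairs via Lagrange interpolation over the field."""
    pts = points[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = _mul_linear(basis, xj)   # numerator of l_i(x)
                denom = denom * (xi - xj) % P    # l_i's denominator
        scale = yi * pow(denom, -1, P) % P
        for t, a in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * a) % P
    return coeffs

msg = [5, 200, 13, 42]                  # k = 4 message symbols
code = rs_encode(msg, 8)                # n = 8: tolerates 4 erasures
survivors = [(x, code[x]) for x in (1, 3, 4, 7)]
print(rs_recover(survivors, 4))         # → [5, 200, 13, 42]
```

This covers only erasures (known loss positions); correcting errors at unknown positions needs the heavier algebraic decoders (e.g. Berlekamp-Massey) that theses like the one above compare.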
24

Wang, Xuesong. "Cartesian authentication codes from error correcting codes /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20WANGX.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ramadas, Manikantan. "STUDY OF A PROTOCOL AND A PRIORITY PARADIGM FOR DEEP SPACE DATA COMMUNICATION." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1178789539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hessler, Martin. "Optimization, Matroids and Error-Correcting Codes." Doctoral thesis, Linköpings universitet, Tillämpad matematik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51722.

Full text
Abstract:
The first subject we investigate in this thesis deals with optimization problems on graphs, where the edges are given costs defined by the values of independent exponential random variables. We show how to calculate some or all moments of the distributions of the costs of some optimization problems on graphs.
The second subject that we investigate is 1-error-correcting perfect binary codes, perfect codes for short. In most work about perfect codes, two codes are considered equivalent if there is an isometric mapping between them; we call this isometric equivalence. Another type of equivalence is given if two codes can be mapped onto each other using a non-singular linear map; we call this linear equivalence. A third type of equivalence is given if two codes can be mapped onto each other using a composition of an isometric map and a non-singular linear map; we call this extended equivalence.
In Paper 1 we give a new, better bound on how much the cost of the matching problem with exponential edge costs varies from its mean. In Paper 2 we calculate the expected cost of an LP-relaxed version of the matching problem where some edges are given zero cost; a special case is when each vertex, with probability 1 - p, has a zero-cost loop, and for this problem we prove that the expected cost is given by a formula. In Paper 3 we define the polymatroid assignment problem and give a formula for calculating all moments of its cost. In Paper 4 we present a computer enumeration of the 197 isometric equivalence classes of the perfect codes of length 31 of rank 27 and with a kernel of dimension 24. In Paper 5 we investigate when it is possible to map two perfect codes onto each other using a non-singular linear map. In Paper 6 we give an invariant for the equivalence classes of all perfect codes of all lengths when linear equivalence is considered. In Paper 7 we give an invariant for the equivalence classes of all perfect codes of all lengths when extended equivalence is considered. In Paper 8 we define a class of perfect codes that we call FRH-codes. It is shown that each FRH-code is linearly equivalent to a so-called Phelps code and that this class contains the Phelps codes as a proper subset.
APA, Harvard, Vancouver, ISO, and other styles
27

Fyn-Sydney, Betty Iboroma. "Phan geometries and error correcting codes." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4433/.

Full text
Abstract:
In this thesis, we define codes based on the Phan geometry of type An. We show that the action of the group SUn+1(q) is not irreducible on the code. In the rank two case, we prove that the code is spanned by those apartments which only consist of chambers belonging to the Phan geometry and obtain submodules for the code.
APA, Harvard, Vancouver, ISO, and other styles
28

Guruswami, Venkatesan 1976. "List decoding of error-correcting codes." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8700.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 303-315).
Error-correcting codes are combinatorial objects designed to cope with the problem of reliable transmission of information on a noisy channel. A fundamental algorithmic challenge in coding theory and practice is to efficiently decode the original transmitted message even when a few symbols of the received word are in error. The naive search algorithm runs in exponential time, and several classical polynomial time decoding algorithms are known for specific code families. Traditionally, however, these algorithms have been constrained to output a unique codeword. Thus they faced a "combinatorial barrier" and could only correct up to d/2 errors, where d is the minimum distance of the code. An alternate notion of decoding called list decoding, proposed independently by Elias and Wozencraft in the late 50s, allows the decoder to output a list of all codewords that differ from the received word in a certain number of positions. Even when constrained to output a relatively small number of answers, list decoding permits recovery from errors well beyond the d/2 barrier, and opens up the possibility of meaningful error-correction from large amounts of noise. However, for nearly four decades after its conception, this potential of list decoding was largely untapped due to the lack of efficient algorithms to list decode beyond d/2 errors for useful families of codes. This thesis presents a detailed investigation of list decoding, and proves its potential, feasibility, and importance as a combinatorial and algorithmic concept.
(cont.) We prove several combinatorial results that sharpen our understanding of the potential and limits of list decoding, and its relation to more classical parameters like the rate and minimum distance. The crux of the thesis is its algorithmic results, which were lacking in the early works on list decoding. Our algorithmic results include: * Efficient list decoding algorithms for classically studied codes such as Reed-Solomon codes and algebraic-geometric codes. In particular, building upon an earlier algorithm due to Sudan, we present the first polynomial time algorithm to decode Reed-Solomon codes beyond d/2 errors for every value of the rate. * A new soft list decoding algorithm for Reed-Solomon and algebraic-geometric codes, and novel decoding algorithms for concatenated codes based on it. * New code constructions using concatenation and/or expander graphs that have good (and sometimes near-optimal) rate and are efficiently list decodable from extremely large amounts of noise. * Expander-based constructions of linear time encodable and decodable codes that can correct up to the maximum possible fraction of errors, using unique (not list) decoding.
by Venkatesan Guruswami.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
29

Guo, Alan Xinyu. "New error correcting codes from lifting." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99776.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 117-121).
Error correcting codes have been widely used for protecting information from noise. The theory of error correcting codes studies the range of parameters achievable by such codes, as well as the efficiency with which one can encode and decode them. In recent years, attention has focused on the study of sublinear-time algorithms for various classical problems, such as decoding and membership verification. This attention was driven in part by theoretical developments in probabilistically checkable proofs (PCPs) and hardness of approximation. Locally testable codes (codes for which membership can be verified using a sublinear number of queries) form the combinatorial core of PCP constructions and thus play a central role in computational complexity theory. Historically, low-degree polynomials (the Reed-Muller code) have been the locally testable code of choice. Recently, "affine-invariant" codes have come under focus as providing potential for new and improved codes. In this thesis, we exploit a natural algebraic operation known as "lifting" to construct new affine-invariant codes from shorter base codes. These lifted codes generically possess desirable combinatorial and algorithmic properties. The lifting operation preserves the distance of the base code. Moreover, lifted codes are naturally locally decodable and testable. We tap deeper into the potential of lifted codes by constructing the "lifted Reed-Solomon code", a supercode of the Reed-Muller code with the same error-correcting capabilities yet vastly greater rate. The lifted Reed-Solomon code is the first high-rate code known to be locally decodable up to half the minimum distance, locally list-decodable up to the Johnson bound, and robustly testable, with robustness that depends only on the distance of the code. In particular, it is the first high-rate code known to be both locally decodable and locally testable. 
We also apply the lifted Reed-Solomon code to obtain new bounds on the size of Nikodym sets, and also to show that the Reed-Muller code is robustly testable for all field sizes and degrees up to the field size, with robustness that depends only on the distance of the code.
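The lifting operation in this abstract has a simple definitional core: a bivariate word belongs to the lift of a Reed-Solomon base code exactly when its restriction to every affine line is a low-degree univariate polynomial. A minimal membership test over GF(5) (illustrative only; the field size and degree bound are arbitrary choices here, and the rate gains the abstract highlights arise over larger fields of small characteristic):

```python
P, D = 5, 1  # field size and base-code degree bound (illustrative choices)

def degree_at_most(vals, d):
    """vals[t] = g(t) for every t in GF(P).  True iff the interpolating
    polynomial of g has degree <= d, using the identity that the
    coefficient of x^j (1 <= j <= P-1) is -sum_a g(a) * a^(P-1-j) mod P.
    (Python's 0**0 == 1 handles the a = 0, j = P-1 term correctly.)"""
    return all(sum(v * a ** (P - 1 - j) for a, v in enumerate(vals)) % P == 0
               for j in range(d + 1, P))

def in_lifted_code(f):
    """f maps GF(P)^2 -> GF(P) (as a dict).  Membership in the lifted
    Reed-Solomon code: the restriction of f to every affine line must
    agree with a polynomial of degree <= D."""
    F = range(P)
    lines = [[(t, (a + b * t) % P) for t in F] for a in F for b in F]
    lines += [[(a, t) for t in F] for a in F]  # vertical lines
    return all(degree_at_most([f[pt] for pt in line], D) for line in lines)
```

An affine function passes every line test, while x*y has a degree-2 restriction on generic lines and is rejected, so the test captures the "restriction to lines lies in the base code" definition directly.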
by Alan Xinyu Guo.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Vicente, Renato. "Statistical physics of error-correcting codes." Thesis, Aston University, 2000. http://publications.aston.ac.uk/10608/.

Full text
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most of the cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free-energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free-energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise parameters of the high performance codes proposed by Kanter and Saad.
APA, Harvard, Vancouver, ISO, and other styles
31

Erxleben, Wayne Henry 1963. "Error-correcting two-dimensional modulation codes." Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/291577.

Full text
Abstract:
Modulation coding, to limit the number of consecutive zeroes in a data stream, is essential in digital magnetic recording/playback systems. Additionally, such systems require error correction coding to ensure that the decoded output matches the recorder input, even if noise is present. Typically these two coding steps have been performed independently, although various methods of combining them into one step have recently appeared. Another recent development is two-dimensional modulation codes, which meet runlength constraints using several parallel recording tracks, significantly increasing channel capacity. This thesis combines these two ideas. Previous techniques (both block and trellis structures) for combining error correction and modulation coding are surveyed, with discussion of their applicability in the two-dimensional case. One approach, based on trellis-coded modulation, is explored in detail, and a class of codes developed which exploit the increased capacity to achieve good error-correcting ability at the same rate as common non-error-correcting one-dimensional codes.
APA, Harvard, Vancouver, ISO, and other styles
32

Joseph, Binoy. "Clustering For Designing Error Correcting Codes." Thesis, Indian Institute of Science, 1994. https://etd.iisc.ac.in/handle/2005/3915.

Full text
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite-dimensional space may be thought of as forming the corresponding number of clusters over that space. In the literature a number of algorithms are available for clustering. We have examined the performance of several such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, and Deterministic Annealing, for the design of codes. But all these algorithms use the Euclidean squared-error distance measure for clustering. This distance measure does not match the distance measure of interest in the error-correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification, based on the code structure, of such algorithms for code design which makes it possible to converge to the optimum codes.
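As a sketch of the clusters-as-codes idea (not the thesis algorithm itself), one can run a Lloyd-style clustering over {0,1}^n in which the distortion measure is Hamming distance and each cluster centre is re-estimated as the bitwise majority of its members, the Hamming-distance analogue of the Euclidean centroid:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def design_code(n, k, rounds=30, seed=1):
    """Cluster {0,1}^n into 2^k groups under Hamming distance; the final
    cluster centres serve as the codewords.  A toy sketch: exhaustive over
    the space, so only feasible for small n."""
    rng = random.Random(seed)
    space = [tuple((v >> i) & 1 for i in range(n)) for v in range(2 ** n)]
    centres = rng.sample(space, 2 ** k)
    for _ in range(rounds):
        clusters = [[] for _ in centres]
        for v in space:  # assign each word to its nearest centre
            nearest = min(range(len(centres)), key=lambda i: hamming(v, centres[i]))
            clusters[nearest].append(v)
        # re-estimate each centre as the bitwise majority of its cluster
        centres = [tuple(int(2 * sum(col) >= len(c)) for col in zip(*c)) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

codewords = design_code(n=5, k=2)
```

The thesis's point is visible even in this sketch: replacing Hamming distance by squared Euclidean distance in the assignment step would optimise the wrong objective for error correction.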
APA, Harvard, Vancouver, ISO, and other styles
33

Joseph, Binoy. "Clustering For Designing Error Correcting Codes." Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/66.

Full text
Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite-dimensional space may be thought of as forming the corresponding number of clusters over that space. In the literature a number of algorithms are available for clustering. We have examined the performance of several such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, and Deterministic Annealing, for the design of codes. But all these algorithms use the Euclidean squared-error distance measure for clustering. This distance measure does not match the distance measure of interest in the error-correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms, such as Simulated Annealing, fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification, based on the code structure, of such algorithms for code design which makes it possible to converge to the optimum codes.
APA, Harvard, Vancouver, ISO, and other styles
34

Oliveira, Jaquinei de. "Roteamento em redes tolerantes a atrasos: intensificação versus exploração no processo de busca por melhores caminhos." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1374.

Full text
Abstract:
Voltado para Redes Tolerantes a Atrasos, o protocolo de roteamento Cultural GrAnt (CGrAnt) utiliza Otimização por Colônia de Formigas para representar o espaço populacional de um Algoritmo Cultural. O protocolo CGrAnt emprega diferentes componentes de conhecimento de modo a explorar as características da rede e melhorar o encaminhamento de mensagens: Domínio, Situacional e Histórico. O conhecimento de Domínio exerce uma função central na operação do CGrAnt uma vez que ele influencia os conhecimentos Situacional e Histórico, determinando se um nó deve explorar (através da seleção de novos encaminhadores de mensagens) ou intensificar (através da seleção de encaminhadores promissores previamente encontrados) o espaço de busca. Através do uso de uma métrica específica que analisa a dinâmica local da mobilidade dos nós, o conhecimento de Domínio determina o status da busca por caminhos (exploração ou intensificação). O uso dessa métrica pode induzir a falso-positivos ou falso-negativos quando o protocolo CGrAnt determina a qualidade de um nó como encaminhador de mensagens. De modo a mitigar as limitações da métrica original do CGrAnt, este trabalho propõe três métricas alternativas para o conhecimento de Domínio do CGrAnt. As métricas propostas abordam aspectos da rede que não são contemplados pela abordagem utilizada pela métrica original. Os resultados mostram que as métricas propostas melhoram o desempenho do protocolo CGrAnt uma vez que apresentam redução na relação de redundância de mensagens para todos os cenários de simulação utilizados e aumentam a taxa de entrega de mensagens em dois dos cenários utilizados.
Designed for Delay Tolerant Networks (DTNs), the Cultural GrAnt (CGrAnt) routing protocol uses the Ant Colony Optimization metaheuristic to represent the population space of a Cultural Algorithm. CGrAnt employs distinct knowledge components in order to explore the network characteristics and improve message forwarding: Domain, Situational, and History. Domain knowledge plays a central role in the protocol operation as it influences the History and Situational knowledge by determining whether a node must explore (through the selection of new message forwarders) or exploit (through the selection of previously found message forwarders) the search space. By using a specific metric that analyzes the local dynamics of node mobility, the Domain knowledge can set the status of the path search (exploration or exploitation). The use of this metric can induce false-positives or false-negatives when the CGrAnt protocol evaluates the quality of a node as a message forwarder. In order to mitigate the limitations of CGrAnt's original metric, this work proposes three alternative metrics for the Domain knowledge of CGrAnt. The proposed metrics cover aspects of the network which are not addressed by the original metric. Results show the proposed metrics improve CGrAnt's performance, as they achieve a lower message redundancy ratio for all the scenarios considered and a higher message delivery ratio for two scenarios.
APA, Harvard, Vancouver, ISO, and other styles
35

Hunt, Andrew W. "Hyper-codes, high-performance low-complexity error-correcting codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0007/MQ32401.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Parvaresh, Farzad. "Algebraic list-decoding of error-correcting codes." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3244733.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed February 23, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 134-141).
APA, Harvard, Vancouver, ISO, and other styles
37

Tjhai, Cen Jung. "A study of linear error correcting codes." Thesis, University of Plymouth, 2007. http://hdl.handle.net/10026.1/1624.

Full text
Abstract:
Since Shannon's ground-breaking work in 1948, there have been two main development streams of channel coding in approaching the limit of communication channels, namely classical coding theory, which aims at designing codes with large minimum Hamming distance, and probabilistic coding, which places the emphasis on low-complexity probabilistic decoding using long codes built from simple constituent codes. This work presents some further investigations in these two channel coding development streams. Low-density parity-check (LDPC) codes form a class of capacity-approaching codes with sparse parity-check matrices and low-complexity decoders. Two novel methods of constructing algebraic binary LDPC codes are presented. These methods are based on the theory of cyclotomic cosets, idempotents and Mattson-Solomon polynomials, and are complementary to each other. The two methods generate, in addition to some new cyclic iteratively decodable codes, the well-known Euclidean and projective geometry codes. Their extension to non-binary fields is shown to be straightforward. These algebraic cyclic LDPC codes, for short block lengths, converge considerably well under iterative decoding. It is also shown that for some of these codes, maximum likelihood performance may be achieved by a modified belief propagation decoder which uses a different subset of codewords of the dual code for each iteration. Following a property of the revolving-door combination generator, multi-threaded minimum Hamming distance computation algorithms are developed. Using these algorithms, the previously unknown minimum Hamming distance of the quadratic residue code for prime 199 has been evaluated. In addition, the highest minimum Hamming distance attainable by all binary cyclic codes of odd lengths from 129 to 189 has been determined, and as many as 901 new binary linear codes which have higher minimum Hamming distance than the previously best known linear codes have been found.
It is shown that by exploiting the structure of circulant matrices, the number of codewords required to compute the minimum Hamming distance, and the number of codewords of a given Hamming weight, of binary double-circulant codes based on primes may be reduced. A means of independently verifying the exhaustively computed number of codewords of a given Hamming weight of these double-circulant codes is developed, and in conjunction with this it is proved that some published results are incorrect and the correct weight spectra are presented. Moreover, it is shown that it is possible to estimate the minimum Hamming distance of this family of prime-based double-circulant codes. It is shown that linear codes may be efficiently decoded using the incremental correlation Dorsch algorithm. By extending this algorithm, a list decoder is derived and a novel CRC-less error detection mechanism that offers much better throughput and performance than the conventional CRC scheme is described. Using the same method it is shown that the performance of the conventional CRC scheme may be considerably enhanced. Error detection is an integral part of an incremental redundancy communications system, and it is shown that sequences of good error correction codes suitable for use in incremental redundancy communications systems may be obtained using Constructions X and XX. Examples are given and their performances presented in comparison to conventional CRC schemes.
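The exhaustive baseline behind such minimum-distance computations can be stated in a few lines (the thesis accelerates this with revolving-door enumeration, multithreading and circulant structure; the sketch below only shows the naive search, which uses the fact that for linear codes the minimum distance equals the minimum nonzero codeword weight):

```python
from itertools import product

def min_distance(G):
    """Exhaustive minimum Hamming distance of the binary linear code
    generated by the rows of G.  Enumerates all 2^k - 1 nonzero
    messages, so only practical for small dimension k."""
    k, n = len(G), len(G[0])
    best = n
    for m in product((0, 1), repeat=k):
        if not any(m):
            continue  # skip the zero codeword
        cw = [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

# Systematic generator matrix of the [7, 4] Hamming code, distance 3.
G_hamming = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
```

For the double-circulant codes in the thesis this loop is exactly what the circulant symmetry prunes: many codewords are cyclic shifts of one another and need not be enumerated separately.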
APA, Harvard, Vancouver, ISO, and other styles
38

Feldman, Jon 1975. "Decoding error-correcting codes via linear programming." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/42831.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 147-151).
Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new "LP decoders" have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the channel. We refer to this property as the ML certificate property. We provide specific LP decoders for two major families of codes: turbo codes and low-density parity-check (LDPC) codes. These codes have received a great deal of attention recently due to their unprecedented error-correcting performance.
(cont.) Our decoder is particularly attractive for analysis of these codes because the standard message-passing algorithms used for decoding are often difficult to analyze. For turbo codes, we give a relaxation very close to min-cost flow, and show that the success of the decoder depends on the costs in a certain residual graph. For the case of rate-1/2 repeat-accumulate codes (a certain type of turbo code), we give an inverse polynomial upper bound on the probability of decoding failure. For LDPC codes (or any binary linear code), we give a relaxation based on the factor graph representation of the code. We introduce the concept of fractional distance, which is a function of the relaxation, and show that LP decoding always corrects a number of errors up to half the fractional distance. We show that the fractional distance is exponential in the girth of the factor graph. Furthermore, we give an efficient algorithm to compute this fractional distance. We provide experiments showing that the performance of our decoders is comparable to the standard message-passing decoders. We also give new provably convergent message-passing decoders based on linear programming duality that have the ML certificate property.
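A small illustration of the relaxation described above: Feldman's LP replaces each parity check by the linear inequalities of its local codeword polytope, and decoding minimises the channel cost over their intersection. The membership test below (a sketch; the full decoder additionally needs an LP solver) makes the fractional-vertex phenomenon concrete:

```python
from itertools import combinations

def in_relaxed_polytope(x, checks):
    """x is a vector with entries in [0, 1]; `checks` lists, for each
    parity check, the variable indices it involves.  Tests Feldman's
    relaxation: for every check N and every odd-sized subset S of N,
        sum_{i in S} x_i - sum_{i in N \\ S} x_i <= |S| - 1."""
    if not all(0 <= xi <= 1 for xi in x):
        return False
    for N in checks:
        for r in range(1, len(N) + 1, 2):   # odd-sized subsets only
            for S in combinations(N, r):
                rest = (i for i in N if i not in S)
                if sum(x[i] for i in S) - sum(x[i] for i in rest) > len(S) - 1 + 1e-9:
                    return False
    return True

checks = [[0, 1, 2]]                                  # one degree-3 parity check
even = in_relaxed_polytope((1, 1, 0), checks)         # codeword: inside
odd = in_relaxed_polytope((1, 0, 0), checks)          # parity violated: outside
frac = in_relaxed_polytope((0.5, 0.5, 0.5), checks)   # fractional point: inside
```

The all-halves point satisfies every inequality yet is no codeword; such fractional vertices are exactly what the thesis's notion of fractional distance measures.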
by Jon Feldman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
39

Kalyanaraman, Shankar Umans Christopher. "On obtaining pseudorandomness from error-correcting codes /." Diss., Pasadena, Calif. : California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-06022006-170858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lyle, Suzanne McLean. "Error Correcting Codes and the Human Genome." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1689.

Full text
Abstract:
In this work, we study error correcting codes and generalize the concepts with a view toward a novel application in the study of DNA sequences. The author investigates the possibility that an error correcting linear code could be included in the human genome through application and research. The author finds that while it is an accepted hypothesis that it is reasonable that some kind of error correcting code is used in DNA, no one has actually been able to identify one. The author uses the application to illustrate how the subject of coding theory can provide a teaching enrichment activity for undergraduate mathematics.
APA, Harvard, Vancouver, ISO, and other styles
41

Harrington, James William Preskill John P. "Analysis of quantum error-correcting codes : symplectic lattice codes and toric codes /." Diss., Pasadena, Calif. : California Institute of Technology, 2004. http://resolver.caltech.edu/CaltechETD:etd-05122004-113132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pfennig, Stefan, and Elke Franz. "Comparison of Different Secure Network Coding Paradigms Concerning Transmission Efficiency." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-145096.

Full text
Abstract:
Preventing the success of active attacks is of essential importance for network coding since even the infiltration of one single corrupted data packet can jam large parts of the network. The existing approaches for network coding schemes preventing such pollution attacks can be divided into two categories: utilize cryptographic approaches or utilize redundancy similar to error correction coding. Within this paper, we compared both paradigms concerning efficiency of data transmission under various circumstances. Particularly, we considered an attacker of a certain strength as well as the influence of the generation size. The results are helpful for selecting a suitable approach for network coding taking into account both security against pollution attacks and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
43

Pfennig, Stefan, and Elke Franz. "Comparison of Different Secure Network Coding Paradigms Concerning Transmission Efficiency." Technische Universität Dresden, 2013. https://tud.qucosa.de/id/qucosa%3A28134.

Full text
Abstract:
Preventing the success of active attacks is of essential importance for network coding since even the infiltration of one single corrupted data packet can jam large parts of the network. The existing approaches for network coding schemes preventing such pollution attacks can be divided into two categories: utilize cryptographic approaches or utilize redundancy similar to error correction coding. Within this paper, we compared both paradigms concerning efficiency of data transmission under various circumstances. Particularly, we considered an attacker of a certain strength as well as the influence of the generation size. The results are helpful for selecting a suitable approach for network coding taking into account both security against pollution attacks and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
44

Albini, Fábio Luiz Pessoa. "PTTA: protocolo para distribuição de conteúdo em redes tolerantes ao atraso e desconexões." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/689.

Full text
Abstract:
O presente trabalho consiste na proposta de um novo protocolo de transporte para redes tolerantes a atrasos e desconexões (DTN - Delay Tolerant Network) chamado PTTA - Protocolo de Transporte Tolerante a Atrasos (em inglês - DTTP - Delay Tolerant Transport Protocol). Este protocolo tem o objetivo de oferecer uma confiabilidade estatística na entrega das informações em redes deste tipo. Para isso, serão utilizados Códigos Fontanais como técnica de correção de erros. Os resultados mostram as vantagens da utilização do PTTA. Este trabalho ainda propõe um mecanismo de controle da fonte adaptável para o PTTA a fim de limitar a quantidade de dados gerados pela origem (fonte). O esquema proposto almeja aumentar a diversidade das informações codificadas sem o aumento da carga na rede. Para atingir este objetivo o intervalo de geração e o TTL (Time To Live - Tempo de vida) das mensagens serão manipulados com base em algumas métricas da rede. A fim de validar a eficiência do mecanismo proposto, diferentes cenários foram testados utilizando os principais protocolos de roteamento para DTNs. Os resultados de desempenho foram obtidos levando em consideração o tamanho do buffer, o TTL das mensagens e a quantidade de informação redundante gerada na rede. Os resultados de simulações obtidos através do simulador ONE mostram que nos cenários avaliados, o PTTA alcança um aumento na taxa de entrega das informações em um menor tempo, quando comparado com outro protocolo de transporte sem confirmação, permitindo assim um ganho de desempenho na rede.
The present work proposes a new transport protocol for delay and disruption tolerant networks (DTN - Delay Tolerant Network) called DTTP - Delay Tolerant Transport Protocol (in Portuguese, PTTA - Protocolo de Transporte Tolerante a Atrasos). This protocol aims to provide statistical reliability in DTNs' information delivery. For this, we use fountain codes as the error correction technique. The results show the advantages of using DTTP. This work also proposes an adaptive control mechanism for the DTTP source to limit the amount of generated data. The proposed scheme aims at increasing the diversity of encoded information without increasing the load on the network. To achieve this goal, the message generation interval and TTL (Time To Live) are handled based on some network metrics. In order to validate the efficiency of the proposed mechanism, different scenarios were tested using the main routing protocols for DTNs. The performance results were obtained taking into account the buffer size, message TTL and the amount of redundant information generated on the network. The simulation results, obtained with the ONE simulator, show that in the evaluated scenarios PTTA achieves an increase in the information delivery rate in a shorter time compared to another unacknowledged transport protocol, thus allowing a gain in network performance.
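The fountain-code idea PTTA relies on can be sketched with a toy LT-style encoder and peeling decoder (degrees are uniform here for brevity; practical LT codes draw them from the robust soliton distribution, and PTTA's actual packet format is not modelled):

```python
import random

def lt_encode(blocks, n_packets, seed=7):
    """Rateless fountain encoding: each packet is the XOR of a random
    subset of the integer source blocks, tagged with the subset."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        deg = rng.randint(1, len(blocks))
        idx = frozenset(rng.sample(range(len(blocks)), deg))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve packets with exactly one
    unknown block, substituting recovered blocks into the rest.
    Returns None for blocks that could not be recovered."""
    out = {}
    progress = True
    while progress and len(out) < k:
        progress = False
        for idx, val in packets:
            unknown = set(idx) - out.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in set(idx) - {i}:
                    val ^= out[j]  # strip already-known blocks
                out[i] = val
                progress = True
    return [out.get(i) for i in range(k)]
```

Because any sufficiently large random subset of packets suffices, the source can keep generating packets until the destination reports success, which is what makes the scheme attractive for DTNs, where acknowledgements are scarce.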
APA, Harvard, Vancouver, ISO, and other styles
45

Spielman, Daniel Alan. "Computationally efficient error-correcting codes and holographic proofs." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tingxian, Zhou, HOU LIKUN, and XU BINGXING. "The Error-Correcting Codes of The m-Sequence." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613419.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The paper analyses the properties of m-sequence error-correcting codes under correlation detection decoding, derives a formula for the number of tolerable errors when a binary sequence with good auto-correlation properties is used as an error-correcting code, provides a method to increase the efficiency of m-sequence error-correcting codes, and presents the coding and decoding procedures in the form of framed diagrams.
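A minimal illustration of the scheme the paper analyses, assuming the standard construction: one period of an m-sequence from a primitive polynomial, its two-valued autocorrelation, and correlation-detection decoding over the cyclic shifts (the paper's own tap choices and sequence lengths are not specified here):

```python
def msequence():
    """One period (15 bits) of the m-sequence given by the recurrence
    a_t = a_{t-3} XOR a_{t-4}, i.e. the primitive polynomial x^4 + x + 1."""
    a = [1, 0, 0, 0]  # any nonzero initial fill works
    while len(a) < 15:
        a.append(a[-3] ^ a[-4])
    return a

def correlate(u, v):
    """Correlation of two binary words under the 0 -> +1, 1 -> -1 mapping."""
    return sum((1 - 2 * x) * (1 - 2 * y) for x, y in zip(u, v))

def correlation_decode(received, seq):
    """Correlation detection: return the cyclic shift of the m-sequence
    that correlates best with the received word."""
    shifts = [seq[s:] + seq[:s] for s in range(len(seq))]
    return max(shifts, key=lambda c: correlate(received, c))
```

Distinct shifts correlate at -1 while the true shift scores 15, so the correct shift still wins as long as the error count e satisfies 15 - 2e > -1 + 2e, i.e. up to 3 errors; counts of this kind are what the paper's error-tolerance formula generalises.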
APA, Harvard, Vancouver, ISO, and other styles
47

Hieta-aho, Erik. "On Finite Rings, Algebras, and Error-Correcting Codes." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1525182104493243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Corazza, Federico Augusto. "Analysis of graph-based quantum error-correcting codes." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23801/.

Full text
Abstract:
With the advent of quantum computers, there has been a growing interest in the practicality of this device. Due to the delicate conditions that surround physical qubits, one could wonder whether any useful computation could be implemented on such devices. As we describe in this work, it is possible to exploit concepts from classical information theory and employ quantum error-correcting techniques. Thanks to the Threshold Theorem, if the error probability of physical qubits is below a given threshold, then the logical error probability corresponding to the encoded data qubit can be arbitrarily low. To this end, we describe decoherence which is the phenomenon that quantum bits are subject to and is the main source of errors in quantum memories. From the cause of error of a single qubit, we then introduce the error models that can be used to analyze quantum error-correcting codes as a whole. The main type of code that we studied comes from the family of topological codes and is called surface code. Of these codes, we consider both the toric and planar structures. We then introduce a variation of the standard planar surface code which better captures the symmetries of the code architecture. Once the main properties of surface codes have been discussed, we give an overview of the working principles of the algorithm used to decode this type of topological code: the minimum weight perfect matching. Finally, we show the performance of the surface codes that we introduced, comparing them based on their architecture and properties. These simulations have been performed with different error channel models to give a more thorough description of their performance in several situations showing relevant results.
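The decoding principle can be seen on the smallest classical relative of these codes, the repetition code, whose pairwise parity checks are a 1D analogue of surface-code stabilizers (a toy sketch only: real surface codes replace the brute-force search below with minimum weight perfect matching on the defect graph, and handle phase errors as well as bit flips):

```python
from itertools import product

def syndrome(err):
    """Parity of each neighbouring pair of bits: the 1D analogue of
    measuring surface-code stabilizers without reading the data itself."""
    return tuple(err[i] ^ err[i + 1] for i in range(len(err) - 1))

def decode(synd, n):
    """Minimum-weight error pattern consistent with the syndrome
    (exhaustive search here; MWPM achieves this efficiently for
    surface codes)."""
    return min((e for e in product((0, 1), repeat=n) if syndrome(e) == synd),
               key=lambda e: sum(e))
```

Applying the returned pattern cancels the physical errors whenever fewer than half the qubits are hit, which is the mechanism behind the threshold behaviour discussed in the abstract.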
APA, Harvard, Vancouver, ISO, and other styles
49

Rodrigues, Luís Filipe Abade. "Error correcting codes for visible light communication systems." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/15887.

Full text
Abstract:
Mestrado em Engenharia Eletrónica e Telecomunicações
Over the past few years, the number of wireless networks users has been increasing. Until now, Radio-Frequency (RF) used to be the dominant technology. However, the electromagnetic spectrum in these region is being saturated, demanding for alternative wireless technologies. Recently, with the growing market of LED lighting, the Visible Light Communications has been drawing attentions from the research community. First, it is an eficient device for illumination. Second, because of its easy modulation and high bandwidth. Finally, it can combine illumination and communication in the same device, in other words, it allows to implement highly eficient wireless communication systems. One of the most important aspects in a communication system is its reliability when working in noisy channels. In these scenarios, the received data can be afected by errors. In order to proper system working, it is usually employed a Channel Encoder in the system. Its function is to code the data to be transmitted in order to increase system performance. It commonly uses ECC, which appends redundant information to the original data. At the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a Channel Encoder for VLC. It was consider several techniques such as Reed-Solomon and Convolutional codes, Block and Convolutional Interleaving, CRC and Puncturing. A detailed analysis of each technique characteristics was made in order to choose the most appropriate ones. Simulink models were created in order to simulate how diferent codes behave in diferent scenarios. Later, the models were implemented in a FPGA and simulations were performed. Hardware co-simulations were also implemented to faster simulation results. At the end, diferent techniques were combined to create a complete Channel Encoder capable of detect and correct random and burst errors, due to the usage of a RS(255,213) code with a Block Interleaver. 
Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
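The channel-coding chain described in this abstract pairs an RS(255,213) code with a block interleaver and a CRC-32 integrity check. A full RS codec is beyond a short sketch, but the interleaving and CRC stages are easy to illustrate. The following plain-Python sketch (function names are my own, not from the dissertation) shows how a block interleaver spreads a burst of channel errors across several codewords, and how CRC-32 flags frames with residual, uncorrectable errors:

```python
import zlib

def block_interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write symbols row by row into a rows x cols block, read them out
    column by column. A burst of consecutive channel errors is then
    spread out, so each codeword (row) sees only a few error symbols."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Inverse of block_interleave: restore the original symbol order."""
    assert len(data) == rows * cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

def append_crc32(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 so the receiver can detect residual errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc32(frame: bytes) -> bool:
    """Return True if the frame's trailing CRC-32 matches its payload."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received
```

In the full system sketched by the abstract, the transmitter would RS-encode, then interleave; the receiver deinterleaves, RS-decodes, and finally verifies the CRC-32 to flag any frames the RS decoder could not repair.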
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Winnie. "Generalised linear anticodes and optimum error-correcting codes." Carleton University dissertation, Systems and Computer Engineering. Ottawa, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography