Journal articles on the topic 'Error-correcting codes (Information theory) Radio'

Consult the top 40 journal articles for your research on the topic 'Error-correcting codes (Information theory) Radio.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference for the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Conway, J., and N. Sloane. "Lexicographic codes: Error-correcting codes from game theory." IEEE Transactions on Information Theory 32, no. 3 (May 1986): 337–48. http://dx.doi.org/10.1109/tit.1986.1057187.

2

Curto, Carina, Vladimir Itskov, Katherine Morrison, Zachary Roth, and Judy L. Walker. "Combinatorial Neural Codes from a Mathematical Coding Theory Perspective." Neural Computation 25, no. 7 (July 2013): 1891–925. http://dx.doi.org/10.1162/neco_a_00459.

Abstract:
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
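The error-correction notions invoked in this abstract reduce to Hamming distance: a code with minimum distance d corrects t = ⌊(d−1)/2⌋ errors. A minimal stdlib sketch of that relationship (the repetition code used here is only an illustration, not one of the paper's RF codes):

```python
from itertools import combinations

def min_distance(code):
    """Minimum pairwise Hamming distance of a binary code (equal-length tuples)."""
    return min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2))

def correctable_errors(code):
    """A code with minimum distance d corrects t = floor((d - 1) / 2) errors."""
    return (min_distance(code) - 1) // 2

# Length-5 repetition code: d = 5, so it corrects any 2 bit flips.
rep5 = [(0,) * 5, (1,) * 5]
print(min_distance(rep5), correctable_errors(rep5))  # 5 2
```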
3

Huang, Pengfei, Yi Liu, Xiaojie Zhang, Paul H. Siegel, and Erich F. Haratsch. "Syndrome-Coupled Rate-Compatible Error-Correcting Codes: Theory and Application." IEEE Transactions on Information Theory 66, no. 4 (April 2020): 2311–30. http://dx.doi.org/10.1109/tit.2020.2966439.

4

Namba, Kazuteru, and Eiji Fujiwara. "Nonbinary single-symbol error correcting, adjacent two-symbol transposition error correcting codes over integer rings." Systems and Computers in Japan 38, no. 8 (2007): 54–60. http://dx.doi.org/10.1002/scj.10516.

5

Ben-Gal, Irad, and Lev B. Levitin. "An application of information theory and error-correcting codes to fractional factorial experiments." Journal of Statistical Planning and Inference 92, no. 1-2 (January 2001): 267–82. http://dx.doi.org/10.1016/s0378-3758(00)00165-8.

6

Namba, Kazuteru, and Eiji Fujiwara. "A class of systematic m-ary single-symbol error correcting codes." Systems and Computers in Japan 32, no. 6 (2001): 21–28. http://dx.doi.org/10.1002/scj.1030.

7

Shimada, Ryosaku, Ryutaro Murakami, Kazuharu Sono, and Yoshiteru Ohkura. "Arithmetic burst error correcting fire-type cyclic ST-AN codes." Systems and Computers in Japan 18, no. 7 (1987): 57–68. http://dx.doi.org/10.1002/scj.4690180706.

8

Haselgrove, H. L., and P. P. Rohde. "Trade-off between the tolerance of located and unlocated errors in nondegenerate quantum codes." Quantum Information and Computation 8, no. 5 (May 2008): 399–410. http://dx.doi.org/10.26421/qic8.5-3.

Abstract:
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction of which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity known as the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
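The "well-known quantum Hamming bound" the authors generalize can be checked numerically. This is a sketch of the standard (unlocated-errors-only) bound for nondegenerate [[n, k, d]] codes; the located/unlocated generalization derived in the paper is not reproduced here:

```python
from math import comb

def quantum_hamming_ok(n, k, d):
    """Nondegenerate quantum Hamming bound: 2**(n-k) >= sum_{j<=t} 3**j * C(n, j),
    where t = (d - 1) // 2 errors must be correctable (3 nontrivial Paulis per qubit)."""
    t = (d - 1) // 2
    return 2 ** (n - k) >= sum(3 ** j * comb(n, j) for j in range(t + 1))

print(quantum_hamming_ok(5, 1, 3))  # True: the [[5,1,3]] code saturates the bound
print(quantum_hamming_ok(4, 1, 3))  # False: no nondegenerate [[4,1,3]] code can exist
```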
9

He, Xianmang. "Constructing new q-ary quantum MDS codes with distances bigger than q/2 from generator matrices." Quantum Information and Computation 18, no. 3&4 (March 2018): 223–30. http://dx.doi.org/10.26421/qic18.3-4-3.

Abstract:
The construction of quantum error-correcting codes has been an active field of quantum information theory since the publication of the foundational works of Shor, Steane, and Laflamme et al. [Shor 1995; Steane 1998; Laflamme 1996]. It is becoming more and more difficult to construct new quantum MDS codes with large minimum distance. In this paper, based on the approach developed in the author's earlier work [He 2016], we construct several new classes of quantum MDS codes. The quantum MDS codes exhibited here have not been constructed before, and their distance parameters are bigger than q/2.
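For context, "quantum MDS" means meeting the quantum Singleton bound k ≤ n − 2(d − 1) with equality; a one-line check of that standard background fact (not the paper's construction):

```python
def is_quantum_mds(n, k, d):
    """Quantum Singleton bound: k <= n - 2*(d - 1); equality defines a quantum MDS code."""
    return k == n - 2 * (d - 1)

print(is_quantum_mds(5, 1, 3))  # True: the [[5,1,3]] code is quantum MDS
print(is_quantum_mds(9, 1, 3))  # False: Shor's [[9,1,3]] code is not MDS
```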
10

Kuznetsov, Alexandr, Oleg Oleshko, and Kateryna Kuznetsova. "ENERGY GAIN FROM ERROR-CORRECTING CODING IN CHANNELS WITH GROUPING ERRORS." Acta Polytechnica 60, no. 1 (March 2, 2020): 65–72. http://dx.doi.org/10.14311/ap.2020.60.0065.

Abstract:
This article explores a mathematical model of a data transmission channel with error grouping. We propose a method for estimating the energy gain from coding and the energy efficiency of binary codes in channels with grouped errors. The proposed method uses a simplified Bennet and Froelich model and allows the energy gain from coding to be studied for a wide class of data channels without restricting the length distribution of the error bursts. The reliability of the obtained results is confirmed by comparison with known results from the theory of error-correcting coding in the simplified variant.
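For orientation, a textbook formula relates code rate and correction capability to asymptotic coding gain on hard-decision channels; this is standard background only, not the Bennet-Froelich-based method of the paper:

```python
from math import log10

def hard_decision_gain_db(n, k, t):
    """Asymptotic coding gain (dB) of a hard-decision t-error-correcting (n, k)
    code over an uncoded system: 10 * log10(R * (t + 1)), with rate R = k/n."""
    return 10 * log10((k / n) * (t + 1))

# (7,4) Hamming code, t = 1 correctable error
print(round(hard_decision_gain_db(7, 4, 1), 2))  # 0.58
```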
11

Mei, Fan, Hong Chen, and Yingke Lei. "Blind Recognition of Forward Error Correction Codes Based on a Depth Distribution Algorithm." Symmetry 13, no. 6 (June 21, 2021): 1094. http://dx.doi.org/10.3390/sym13061094.

Abstract:
Forward error correction (FEC) codes are a vital part of modern communication systems; therefore, recognition of the coding type is an important issue in non-cooperative communication. At present, recognition of FEC codes is mainly confined to semi-blind identification, where the family of codes is known. However, because of information asymmetry, the receiver cannot know in advance the type of channel coding used in non-cooperative systems such as cognitive radio and remote sensing of communication. It is therefore important to recognize the error-correcting encoding type with no prior information. Although the traditional algorithm can also recognize the type of codes, it is only applicable to the error-free case, and its practicality is poor. In this paper, we propose a new method to identify the types of FEC codes based on depth distribution in non-cooperative communication. The proposed algorithm can effectively recognize linear block codes, convolutional codes, and Turbo codes at low error probability levels, and has higher robustness to noisy transmission environments. In addition, an improved matrix estimation algorithm based on Gaussian elimination is adopted, which effectively improves parameter identification in a noisy environment. Finally, we use a general framework to unify all the reconstruction algorithms and simplify the complexity of the algorithm. The simulation results show that, compared with the traditional algorithm based on matrix rank, the proposed algorithm has better anti-interference performance. The proposed method is simple and convenient for engineering and practical applications.
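The matrix-rank criterion the authors compare against can be sketched directly: codewords stacked as rows of a GF(2) matrix are rank-deficient at the true code length, which is what Gaussian-elimination-based estimators exploit. A stdlib sketch with a (7,4) Hamming code (illustrative background, not the paper's depth-distribution algorithm):

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2); each row is an int bitmask."""
    basis = {}                       # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in basis:
                basis[lead] = row
                break
            row ^= basis[lead]       # eliminate the leading bit and continue
    return len(basis)

# Generator rows of a systematic (7,4) Hamming code, as 7-bit integers.
G = [0b1000110, 0b0100101, 0b0010011, 0b0001111]
codewords = [0]
for g in G:                          # all 16 codewords = XOR combinations of the rows
    codewords += [c ^ g for c in codewords]

print(gf2_rank(codewords))                    # 4: rank deficiency (4 < 7) betrays the code
print(gf2_rank([1 << i for i in range(7)]))   # 7: generic data is full rank
```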
12

Mundici, Daniele. "Ulam Games, Łukasiewicz Logic, and AF C*-Algebras." Fundamenta Informaticae 18, no. 2-4 (April 1, 1993): 151–61. http://dx.doi.org/10.3233/fi-1993-182-405.

Abstract:
Ulam asked for the minimum number of yes-no questions necessary to find an unknown number in the search space (1, …, 2^n) if up to l of the answers may be erroneous. The solutions to this problem provide optimal adaptive l-error-correcting codes. Traditional, nonadaptive l-error-correcting codes correspond to the particular case when all questions are formulated before all answers. We show that answers in Ulam's game obey the (l+2)-valued logic of Łukasiewicz. Since approximately finite-dimensional (AF) C*-algebras can be interpreted in the infinite-valued sentential calculus, we discuss the relationship between game-theoretic notions and their C*-algebraic counterparts. We describe the correspondence between continuous trace AF C*-algebras and Ulam games with separable Boolean search space S whose questions are the clopen subspaces of S. We also show that these games correspond to finite products of countable Post MV algebras, as well as to countable lattice-ordered Specker groups with strong unit.
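The question count in Ulam's problem is governed by Berlekamp's "volume" bound, the adaptive analogue of the Hamming bound; a small sketch (for n = 20 with one admissible lie the bound evaluates to the classic value of 25 questions):

```python
from math import comb

def berlekamp_min_questions(n, lies):
    """Smallest q with 2**q >= 2**n * sum_{i<=lies} C(q, i): the Berlekamp volume
    bound, a lower bound on the questions needed to find a number in (1, ..., 2**n)
    when up to `lies` answers may be wrong."""
    q = n
    while 2 ** q < 2 ** n * sum(comb(q, i) for i in range(lies + 1)):
        q += 1
    return q

print(berlekamp_min_questions(20, 0))  # 20: plain binary search, no lies
print(berlekamp_min_questions(20, 1))  # 25: one lie costs five extra questions
```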
13

Mei, Fan, Hong Chen, and Yingke Lei. "Blind Recognition of Forward Error Correction Codes Based on Recurrent Neural Network." Sensors 21, no. 11 (June 4, 2021): 3884. http://dx.doi.org/10.3390/s21113884.

Abstract:
Forward error correction (FEC) coding is the most common form of channel coding and the key point of error correction coding; therefore, recognizing the coding type is an important issue in non-cooperative communication. At present, recognition of FEC codes is mainly confined to semi-blind identification, where the family of codes is known. However, the receiver cannot know in advance the type of channel coding used in non-cooperative systems such as cognitive radio and remote sensing of communication. It is therefore important to recognize the error-correcting encoding type with no prior information. In this paper, we propose a new method to identify the types of FEC codes based on a Recurrent Neural Network (RNN) under the condition of non-cooperative communication. The algorithm classifies the input data into Bose-Chaudhuri-Hocquenghem (BCH) codes, Low-Density Parity-Check (LDPC) codes, Turbo codes, and convolutional codes. To train an RNN model with better performance, the weight initialization method is optimized and the network performance is improved. The experimental results indicate that the average recognition rate of this model is 99% when the signal-to-noise ratio (SNR) ranges from 0 dB to 10 dB, which meets the requirements of engineering practice under the condition of non-cooperative communication. Moreover, comparisons across different parameters and models show the effectiveness and practicability of the proposed algorithm.
14

Magdalena de la Fuente, Julio Carlos, Nicolas Tarantino, and Jens Eisert. "Non-Pauli topological stabilizer codes from twisted quantum doubles." Quantum 5 (February 17, 2021): 398. http://dx.doi.org/10.22331/q-2021-02-17-398.

Abstract:
It has long been known that long-ranged entangled topological phases can be exploited to protect quantum information against unwanted local errors. Indeed, conditions for intrinsic topological order are reminiscent of criteria for faithful quantum error correction. At the same time, the promise of using general topological orders for practical error correction remains largely unfulfilled to date. In this work, we significantly contribute to establishing such a connection by showing that Abelian twisted quantum double models can be used for quantum error correction. By exploiting the group cohomological data sitting at the heart of these lattice models, we transmute the terms of these Hamiltonians into full-rank, pairwise commuting operators, defining commuting stabilizers. The resulting codes are defined by non-Pauli commuting stabilizers, with local systems that can either be qubits or higher dimensional quantum systems. Thus, this work establishes a new connection between condensed matter physics and quantum information theory, and constructs tools to systematically devise new topological quantum error correcting codes beyond toric or surface code models.
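The requirement of pairwise commuting stabilizers has a simple classical test in the Pauli case: map each Pauli string to its binary (x|z) vector and check that the symplectic inner product vanishes. A sketch using the five-qubit code's generators (standard stabilizer background; the paper's non-Pauli stabilizers need more machinery):

```python
def commutes(p, q):
    """Two Pauli strings commute iff the symplectic inner product of their
    binary (x|z) representations is 0 mod 2."""
    def xz(pauli):
        return ([c in "XY" for c in pauli], [c in "ZY" for c in pauli])
    (x1, z1), (x2, z2) = xz(p), xz(q)
    return (sum(a & b for a, b in zip(x1, z2))
            + sum(a & b for a, b in zip(z1, x2))) % 2 == 0

# Stabilizer generators of the five-qubit code: XZZXI and its cyclic shifts.
gens = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]
print(all(commutes(a, b) for a in gens for b in gens))  # True: a valid commuting set
print(commutes("X", "Z"))                               # False: X and Z anticommute
```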
15

Imai, H., J. Mueller-Quade, A. C. A. Nascimento, P. Tuyls, and A. Winter. "An information theoretical model for quantum secret sharing schemes." Quantum Information and Computation 5, no. 1 (January 2005): 68–79. http://dx.doi.org/10.26421/qic5.1-7.

Abstract:
Similarly to earlier models for quantum error-correcting codes, we introduce a quantum information theoretical model for quantum secret sharing schemes. This model provides new insights into the theory of quantum secret sharing. Using our model, among other results, we give a shorter proof of Gottesman's theorem that the size of the shares in a quantum secret sharing scheme must be as large as the secret itself. We also introduce approximate quantum secret sharing schemes and show the robustness of quantum secret sharing schemes by extending Gottesman's theorem to the approximate case.
16

Maltiyar, Kaveri, and Deepti Malviya. "Polar Code: An Advanced Encoding And Decoding Architecture For Next Generation 5G Applications." International Journal on Recent and Innovation Trends in Computing and Communication 7, no. 5 (June 4, 2019): 26–29. http://dx.doi.org/10.17762/ijritcc.v7i5.5307.

Abstract:
Polar codes are a new form of channel coding expected to become common in next-generation wireless communication systems. Polar codes, introduced by Arikan, achieve the capacity of symmetric channels with low encoding and decoding complexity for a large class of underlying channels. Recently, the polar code has become the most favorable error-correcting code from the viewpoint of information theory due to its capacity-achieving property. Polar codes achieve the capacity of the class of symmetric binary memoryless channels. This paper reviews polar codes as an advanced encoding and decoding architecture for next-generation applications.
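The encoder behind the "low encoding complexity" claim is Arikan's recursive transform x = u·F^⊗n with F = [[1,0],[1,1]], computable with O(N log N) XOR butterflies. A minimal sketch (natural bit order, no bit-reversal permutation, and the frozen-bit selection that defines an actual polar code is omitted):

```python
def polar_encode(u):
    """Apply x = u * F^{kron n}, F = [[1,0],[1,1]], via in-place XOR butterflies."""
    x = list(u)
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]      # pairwise combine: (a, b) -> (a xor b, b)
        step *= 2
    return x

# In a real polar code, k data bits go in the "reliable" positions, the rest frozen to 0.
print(polar_encode([0, 0, 0, 1]))   # [1, 1, 1, 1]
print(polar_encode([1, 0, 0, 0]))   # [1, 0, 0, 0]
```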
17

Klimo, Martin, Peter Lukáč, and Peter Tarábek. "Deep Neural Networks Classification via Binary Error-Detecting Output Codes." Applied Sciences 11, no. 8 (April 15, 2021): 3563. http://dx.doi.org/10.3390/app11083563.

Abstract:
One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high dimensional vector representations when the categorical data’s cardinality is high. The Hamming distance in one-hot encoding is equal to two from the coding theory perspective, which does not allow detection or error-correcting capabilities. Binary coding provides more possibilities for encoding categorical data into the output codes, which mitigates the limitations of the one-hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their possibility of separating class information from the checksum part of the codeword, showing their ability not only to detect recognition errors by calculating non-zero syndrome, but also to evaluate the truth-value of the decision. Experimental results show that the proposed approach achieves similar results as one-hot encoding with a softmax function in terms of accuracy, reliability, and out-of-distribution performance. It suggests a good foundation for future applications, mainly classification tasks with a high number of classes.
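The non-zero-syndrome detection idea the abstract borrows from linear block codes can be shown with a classical (7,4) Hamming code (illustrative background only; the paper's fuzzy-logic training of output codes is not reproduced):

```python
# Parity-check matrix of the (7,4) Hamming code: column j (1-based) is the binary of j.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in (2, 1, 0)]

def syndrome(word):
    """Non-zero syndrome = detected error; for this code it also reads out
    the flipped position in binary."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

cw = [0] * 7                  # the all-zero word is a codeword of every linear code
cw[4] ^= 1                    # corrupt position 5
print(syndrome(cw))           # [1, 0, 1] -> binary 5: error located at position 5
print(syndrome([0] * 7))      # [0, 0, 0] -> accepted as error-free
```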
18

Semerenko, Vasyl, and Oleksandr Voinalovich. "The simplification of computationals in error correction coding." Technology audit and production reserves 3, no. 2(59) (June 30, 2021): 24–28. http://dx.doi.org/10.15587/2706-5448.2021.233656.

Abstract:
The object of research is the processes of error-correcting transformation of information in automated systems. The research aims to reduce the complexity of decoding cyclic codes by combining modern mathematical models and practical tools. The main reason for the complexity of computations with deterministic linear error-correcting codes is the use of the algebraic representation as the main mathematical apparatus for these types of codes. Despite the universality of the algebraic approach, its main drawback is the impossibility of taking into account the characteristic features of all subclasses of linear codes. In particular, the cyclic property is not taken into account at all for cyclic codes. Taking this property into account, one can pass to a fundamentally different mathematical representation of cyclic codes: the theory of linear automata over Galois fields (linear finite-state machines). For the automaton representation of cyclic codes, it is proved that the problem of syndrome decoding of these codes is, in the general case, NP-complete. However, using the proposed hierarchical approach to problems of complexity, a more accurate analysis of the growth of computational complexity can be carried out. Correction of single errors during one time interval (one iteration) of decoding has complexity linear in the length of the codeword, while error correction during m iterations of permutations of codeword bits has polynomial complexity. Accordingly, cyclic codes fall into three subclasses depending on the complexity of their decoding: easily decoded (linear complexity), iteratively decoded (polynomial complexity), and hard to decode (exponential complexity). Practical ways to reduce the complexity of computations are considered: alternating use of probabilistic and deterministic linear codes, simplification of software and hardware implementation at the cost of increased decoding time, and use of interleaving. A method of interleaving is proposed that makes it possible to break up burst errors and replace them with single errors. The mathematical apparatus of linear automata allows these problems of error-correcting coding to be solved together.
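The cyclic property shows up concretely in decoding: the syndrome of a cyclic code is simply the remainder of GF(2) polynomial division of the received word by the generator g(x), which an LFSR computes in hardware. A stdlib sketch with the (7,4) cyclic Hamming code generated by g(x) = x³ + x + 1 (background illustration, not the linear-automaton method of the paper):

```python
def poly_mod2(dividend, generator):
    """Remainder of GF(2) polynomial division; polynomials as ints, LSB = x^0."""
    glen = generator.bit_length()
    while dividend.bit_length() >= glen:
        # XOR the generator aligned with the dividend's leading term
        dividend ^= generator << (dividend.bit_length() - glen)
    return dividend

g = 0b1011                                  # g(x) = x^3 + x + 1
msg = 0b1101                                # 4 message bits
cw = (msg << 3) ^ poly_mod2(msg << 3, g)    # systematic encoding: append check bits
print(poly_mod2(cw, g))                     # 0 -> valid codeword
print(poly_mod2(cw ^ 0b100, g))             # 4 -> non-zero syndrome: error detected
```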
19

Gueye, Ibrahima, Ibra Dioum, Idy Diop, K. Wane Keita, Papis Ndiaye, Moussa Diallo, and Sidi Mohamed Farssi. "Performance of Hybrid RF/FSO Cooperative Systems Based on Quasicyclic LDPC Codes and Space-Coupled LDPC Codes." Wireless Communications and Mobile Computing 2020 (December 30, 2020): 1–15. http://dx.doi.org/10.1155/2020/8814588.

Abstract:
Free space optical (FSO) communication systems provide wireless line-of-sight connectivity in the unlicensed spectrum, and wireless optical communication achieves higher data rates than its radio frequency (RF) counterparts. FSO systems are particularly attractive for the last-mile access problem, bridging fiber optic backbone connectivity to RF access networks. To cope with this practical deployment scenario, increasing attention has been paid to so-called dual-hop (RF/FSO) systems, where RF transmission is used on one hop followed by FSO transmission on the other. In this article, we study the performance of cooperative transmission systems using a mixed RF-FSO DF (decode-and-forward) relay with error-correcting codes, including QC-LDPC codes, at the relay level. The FSO link is modeled by the gamma-gamma distribution, and the RF link by the Additive White Gaussian Noise (AWGN) model. Another innovation in this article is the use of cooperative systems employing a mixed FSO/RF DF relay with quasicyclic low-density parity-check (QC-LDPC) codes at the relay level. We also use space-coupled low-density parity-check (SC-LDPC) codes in the same scheme to show their value in cooperative optical transmission as well as in hybrid RF/FSO transmission, and compare them with QC-LDPC codes. The use of mixed RF/FSO cooperative transmission systems can improve the reliability of information transmission in networks. The results demonstrate improved performance of the cooperative RF/FSO DF system based on QC-LDPC and SC-LDPC codes compared to RF/FSO systems without codes, and also to the DF systems proposed in the existing literature.
20

Weaver, Nik. "Quantum Graphs as Quantum Relations." Journal of Geometric Analysis 31, no. 9 (January 13, 2021): 9090–112. http://dx.doi.org/10.1007/s12220-020-00578-w.

Abstract:
The “noncommutative graphs” which arise in quantum error correction are a special case of the quantum relations introduced in Weaver (Quantum relations. Mem Am Math Soc 215(v–vi):81–140, 2012). We use this perspective to interpret the Knill–Laflamme error-correction conditions (Knill and Laflamme in Theory of quantum error-correcting codes. Phys Rev A 55:900-911, 1997) in terms of graph-theoretic independence, to give intrinsic characterizations of Stahlke’s noncommutative graph homomorphisms (Stahlke in Quantum zero-error source-channel coding and non-commutative graph theory. IEEE Trans Inf Theory 62:554–577, 2016) and Duan, Severini, and Winter’s noncommutative bipartite graphs (Duan et al., op. cit. in Zero-error communication via quantum channels, noncommutative graphs, and a quantum Lovász number. IEEE Trans Inf Theory 59:1164–1174, 2013), and to realize the noncommutative confusability graph associated to a quantum channel (Duan et al., op. cit.) as the pullback of a diagonal relation. Our framework includes as special cases not only purely classical and purely quantum information theory, but also the “mixed” setting which arises in quantum systems obeying superselection rules. Thus we are able to define noncommutative confusability graphs, give error correction conditions, and so on, for such systems. This could have practical value, as superselection constraints on information encoding can be physically realistic.
21

Riznyk, V. V., D. Yu Skrybaylo-Leskiv, V. M. Badz, C. I. Hlod, V. V. Liakh, Y. M. Kulyk, N. B. Romanjuk, K. I. Tkachuk, and V. V. Ukrajinets. "COMPARATIVE ANALYSIS OF MONOLITHIC AND CYCLIC NOISE-PROTECTIVE CODES EFFECTIVENESS." Ukrainian Journal of Information Technology 3, no. 1 (2021): 99–105. http://dx.doi.org/10.23939/ujit2021.03.099.

Abstract:
A comparative analysis is carried out of the effectiveness of monolithic and cyclic noise-protective codes built on "Ideal Ring Bundles" (IRBs), the common theoretical basis for the synthesis, study, and application of such codes for improving the technical characteristics of coding systems with respect to performance, reliability, transformation speed, and security. IRBs are cyclic sequences of positive integers which form perfect partitions of a finite interval of integers: sums of connected IRB elements enumerate the set of natural integers exactly R times. IRB codes, both monolithic and cyclic, formed on the underlying combinatorial constructions can be used to find optimal configurations of applicable coding systems on a common mathematical platform. The mathematical model of noise-protected data coding systems exhibits remarkable properties of harmoniously structured real space, which allow codes with useful capabilities to be configured. The first type comprises self-correcting codes, in which the symbols "1" and "0" of each allowed codeword are arranged monolithically; this allows errors to be detected and corrected automatically from the monolithic structure of the encoded words. IRB codes of the second type improve the noise protection of the codes by choosing the optimal ratio of information parameters. A comparative analysis of cyclic IRB codes with optimized parameters and monolithic IRB codes found that the optimized cyclic IRB codes have the advantage with respect to a clearly fixed number of detected and corrected errors, while monolithic codes are distinguished by faster message decoding due to their inherent self-correction and encryption properties. A monolithic code is characterized by the packing of identical characters into solid blocks. Such blocks can encode data on several levels at the same time, which expands the ability to encrypt and protect encoded data from unauthorized access. The effectiveness of the coding optimization methods is evaluated by the speed of formation of coding systems, method power, and error correction. The model is based on the contemporary theory of combinatorial configurations, which opens a wide scientific field for fundamental and applied research in information technologies, including multidimensional models and algorithms for synthesis of the underlying models.
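The defining IRB property (sums of connected elements enumerating the integers exactly R times) is easy to verify mechanically. A sketch using the classic order-3 ring (1, 2, 4), whose consecutive sums cover 1..6 exactly once; this example is standard background assumed for illustration, not taken from the paper:

```python
from collections import Counter

def irb_multiplicity(seq):
    """Multiplicities of sums of consecutive elements (run lengths 1..n-1)
    of a cyclic sequence. For an Ideal Ring Bundle every value in 1..S-1
    (S = total sum) appears the same number of times R."""
    n = len(seq)
    counts = Counter()
    for start in range(n):
        for length in range(1, n):
            counts[sum(seq[(start + k) % n] for k in range(length))] += 1
    return counts

c = irb_multiplicity([1, 2, 4])             # a classic IRB of order 3
print(all(c[v] == 1 for v in range(1, 7)))  # True: each of 1..6 occurs once (R = 1)
```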
22

Baldi, Marco, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi, and Paolo Santini. "A Finite Regime Analysis of Information Set Decoding Algorithms." Algorithms 12, no. 10 (October 1, 2019): 209. http://dx.doi.org/10.3390/a12100209.

Abstract:
Decoding of random linear block codes has been long exploited as a computationally hard problem on which it is possible to build secure asymmetric cryptosystems. In particular, both correcting an error-affected codeword, and deriving the error vector corresponding to a given syndrome were proven to be equally difficult tasks. Since the pioneering work of Eugene Prange in the early 1960s, a significant research effort has been put into finding more efficient methods to solve the random code decoding problem through a family of algorithms known as information set decoding. The obtained improvements effectively reduce the overall complexity, which was shown to decrease asymptotically at each optimization, while remaining substantially exponential in the number of errors to be either found or corrected. In this work, we provide a comprehensive survey of the information set decoding techniques, providing finite regime temporal and spatial complexities for them. We exploit these formulas to assess the effectiveness of the asymptotic speedups obtained by the improved information set decoding techniques when working with code parameters relevant for cryptographic purposes. We also delineate computational complexities taking into account the achievable speedup via quantum computers and similarly assess such speedups in the finite regime. To provide practical grounding to the choice of cryptographically relevant parameters, we employ as our validation suite the ones chosen by cryptosystems admitted to the second round of the ongoing standardization initiative promoted by the US National Institute of Standards and Technology.
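The basic Prange iteration at the root of the surveyed ISD family fits in a few lines: guess that the chosen information set is error-free, solve for the error pattern on the remaining n−k positions, and accept if the weight matches. A toy instance on the (7,4) Hamming code (illustrative only; real ISD targets cryptographic-size codes and the optimized variants the survey covers):

```python
import random

def gf2_solve(rows, rhs):
    """Solve M x = b over GF(2). rows[i] is the bitmask of row i; returns x
    as a bitmask, or None if the matrix is singular."""
    n = len(rows)
    rows, rhs = rows[:], rhs[:]
    where = [-1] * n                     # where[col] = row holding the pivot for col
    r = 0
    for col in range(n):
        sel = next((i for i in range(r, n) if (rows[i] >> col) & 1), None)
        if sel is None:
            return None
        rows[r], rows[sel] = rows[sel], rows[r]
        rhs[r], rhs[sel] = rhs[sel], rhs[r]
        for i in range(n):
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        where[col] = r
        r += 1
    return sum(rhs[where[col]] << col for col in range(n))

def prange_decode(H, n, s, t, rng=random.Random(1)):
    """One Prange ISD loop: assume all t errors avoid a random information set."""
    r = len(H)
    while True:
        cols = rng.sample(range(n), r)   # candidate error-support positions
        M = [sum(((H[i] >> c) & 1) << j for j, c in enumerate(cols)) for i in range(r)]
        x = gf2_solve(M, [(s >> i) & 1 for i in range(r)])
        if x is not None and bin(x).count("1") <= t:
            return sum(1 << c for j, c in enumerate(cols) if (x >> j) & 1)

# (7,4) Hamming code: column c (0-based) of H is the binary representation of c+1.
H = [sum((((c + 1) >> i) & 1) << c for c in range(7)) for i in range(3)]
e = prange_decode(H, 7, 0b101, 1)        # syndrome 5 -> single error in position 5
print(e == 1 << 4)                       # True
```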
23

ISOKAWA, TEIJIRO, FUKUTARO ABO, FERDINAND PEPER, SUSUMU ADACHI, JIA LEE, NOBUYUKI MATSUI, and SHINRO MASHIKO. "FAULT-TOLERANT NANOCOMPUTERS BASED ON ASYNCHRONOUS CELLULAR AUTOMATA." International Journal of Modern Physics C 15, no. 06 (July 2004): 893–915. http://dx.doi.org/10.1142/s0129183104006327.

Abstract:
Cellular Automata (CA) are a promising architecture for computers with nanometer-scale sized components, because their regular structure potentially allows chemical manufacturing techniques based on self-organization. With the increase in integration density, however, comes a decrease in the reliability of the components from which such computers will be built. This paper employs BCH error-correcting codes to construct CA with improved reliability. We construct an asynchronous CA of which a quarter of the (ternary) bits storing a cell's state information may be corrupted without affecting the CA's operations, provided errors are evenly distributed over a cell's bits (no burst errors allowed). Under the same condition, the corruption of half of a cell's bits can be detected.
24

Kesselring, Markus S., Fernando Pastawski, Jens Eisert, and Benjamin J. Brown. "The boundaries and twist defects of the color code and their applications to topological quantum computation." Quantum 2 (October 19, 2018): 101. http://dx.doi.org/10.22331/q-2018-10-19-101.

Abstract:
The color code is both an interesting example of an exactly solved topologically ordered phase of matter and also among the most promising candidate models to realize fault-tolerant quantum computation with minimal resource overhead. The contributions of this work are threefold. First of all, we build upon the abstract theory of boundaries and domain walls of topological phases of matter to comprehensively catalog the objects realizable in color codes. Together with our classification we also provide lattice representations of these objects which include three new types of boundaries as well as a generating set for all 72 color code twist defects. Our work thus provides an explicit toy model that will help to better understand the abstract theory of domain walls. Secondly, we discover a number of interesting new applications of the cataloged objects for quantum information protocols. These include improved methods for performing quantum computations by code deformation, a new four-qubit error-detecting code, as well as families of new quantum error-correcting codes we call stellated color codes, which encode logical qubits at the same distance as the next best color code, but using approximately half the number of physical qubits. To the best of our knowledge, our new topological codes have the highest encoding rate of local stabilizer codes with bounded-weight stabilizers in two dimensions. Finally, we show how the boundaries and twist defects of the color code are represented by multiple copies of other phases. Indeed, in addition to the well studied comparison between the color code and two copies of the surface code, we also compare the color code to two copies of the three-fermion model. In particular, we find that this analogy offers a very clear lens through which we can view the symmetries of the color code which gives rise to its multitude of domain walls.
APA, Harvard, Vancouver, ISO, and other styles
25

Yu, Mian Shui, Yu Xie, and Xiao Meng Xie. "Age Classification Based on Feature Fusion." Applied Mechanics and Materials 519-520 (February 2014): 644–50. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.644.

Full text
Abstract:
Age classification based on facial images is attracting wide attention with its broad application to human-computer interaction (HCI). Since human senescence is a tremendously complex process, age classification is still a highly challenging issue. In our study, Local Directional Pattern (LDP) and Gabor wavelet transform were used to extract global and local facial features, respectively, that were fused based on information fusion theory. The Principal Component Analysis (PCA) method was used for dimensionality reduction of the fused features, to obtain a lower-dimensional age characteristic vector. A Support Vector Machine (SVM) multi-class classifier with Error Correcting Output Codes (ECOC) was proposed in the paper. This was aimed at multi-class classification problems, such as age classification. Experiments on a public FG-NET age database proved the efficiency of our method.
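The decision rule behind an ECOC multi-class classifier like the one described above reduces to nearest-codeword decoding: each class is assigned a binary codeword, the binary classifiers produce one bit each, and the predicted class is the one whose codeword is closest in Hamming distance. A minimal sketch (the codeword matrix, class names, and classifier outputs below are invented for illustration, not taken from the paper):

```python
# Sketch of Error-Correcting Output Codes (ECOC) decoding, as used with
# an SVM multi-class classifier. Codewords and outputs are illustrative.

def hamming(a, b):
    """Hamming distance between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(bit_outputs, code_matrix):
    """Map the bits produced by the binary classifiers to the class
    whose codeword is nearest in Hamming distance."""
    return min(code_matrix, key=lambda cls: hamming(bit_outputs, code_matrix[cls]))

# Toy codeword matrix for three hypothetical age classes.
codes = {
    "child":  (0, 0, 1, 1),
    "adult":  (0, 1, 0, 1),
    "senior": (1, 0, 0, 0),
}

# Even with one binary classifier flipped (last bit), the nearest
# codeword still identifies the right class.
print(ecoc_decode((0, 1, 0, 0), codes))  # "adult" despite one bit error
```

This is the sense in which the output code is "error correcting": a single misfiring binary classifier does not change the final class decision.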
APA, Harvard, Vancouver, ISO, and other styles
26

Taubin, Feliks, and Andrey Trofimov. "Concatenated Coding for Multilevel Flash Memory with Low Error Correction Capabilities in Outer Stage." SPIIRAS Proceedings 18, no. 5 (September 19, 2019): 1149–81. http://dx.doi.org/10.15622/sp.2019.18.5.1149-1181.

Full text
Abstract:
One of the approaches to organizing error correcting coding for multilevel flash memory is based on a concatenated construction, in particular, on multidimensional lattices for inner coding. A characteristic feature of such structures is that the complexity of the outer decoder dominates the total decoder complexity. Therefore a concatenated construction with a low-complexity outer decoder may be attractive, since in practical applications the decoder complexity is the crucial limitation on the usage of error correction coding. We consider a concatenated coding scheme for multilevel flash memory with Barnes-Wall lattice based codes as the inner code and a Reed-Solomon code correcting up to 4…5 errors as the outer one. Performance analysis is performed for a model characterizing the basic physical features of a flash memory cell with non-uniform target voltage levels and noise variance dependent on the recorded value (input-dependent additive Gaussian noise, ID-AGN). For this model we develop a modification of our approach for evaluating the error probability for the inner code. This modification uses the parallel structure of the inner code trellis, which significantly reduces the computational complexity of the performance estimation. We present numerical examples of achievable recording density for Reed-Solomon codes correcting up to four errors as the outer code for a wide range of retention times and numbers of write/read cycles.
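The two-stage structure analyzed above can be pictured with a much simpler stand-in: a 3-repetition inner code protecting individual symbols against cell noise, and a single parity symbol as the outer code catching residual errors. This is far simpler than the Barnes-Wall / Reed-Solomon pair in the paper; everything below is a toy sketch of the encode/decode split only:

```python
# Toy concatenated scheme: outer code = one parity symbol over bytes,
# inner code = 3-fold repetition per symbol (majority-vote decoding).
# Stands in for the paper's Reed-Solomon outer / lattice inner pair.

def inner_encode(sym):
    return [sym] * 3                      # each symbol written to 3 "cells"

def inner_decode(cells):
    return max(set(cells), key=cells.count)  # majority vote

def outer_encode(syms):
    return syms + [sum(syms) % 256]       # append a parity symbol

def encode(data):
    return [c for s in outer_encode(data) for c in inner_encode(s)]

def decode(cells):
    syms = [inner_decode(cells[i:i + 3]) for i in range(0, len(cells), 3)]
    data, parity = syms[:-1], syms[-1]
    assert sum(data) % 256 == parity      # outer code checks residual errors
    return data

cells = encode([7, 42])
cells[1] = 99                             # one corrupted flash cell
print(decode(cells))                      # [7, 42] recovered
```

The inner decoder absorbs per-cell noise cheaply; the outer code only has to deal with the (rarer) inner-decoder failures, which is why a low-correction-capability outer code can suffice.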
APA, Harvard, Vancouver, ISO, and other styles
27

Timofeev, A. L., and A. Kh Sultanov. "Building a noise-tolerant code based on a holographic representation of arbitrary digital information." Computer Optics 44, no. 6 (December 2020): 978–84. http://dx.doi.org/10.18287/2412-6179-co-739.

Full text
Abstract:
The article considers a method of error-correcting coding based on the holographic representation of a digital signal. The message encoding process is a mathematical simulation of a hologram created in virtual space by a wave from an input signal source. The code word is a hologram of a point, it is also a one-dimensional zone ruler that carries information about the input data block in the form of an n-bit code of the coordinate of the center of the Fresnel zones. It is shown that the holographic representation of the signal has significantly greater noise immunity and allows you to restore the original digital combination when most of the code message is lost and when the encoded signal is distorted by noise several times higher than the signal level. To assess the noise immunity, the reliability of information transmission over the channel with additive white Gaussian noise is compared using the Reed-Solomon code, the Reed-Muller code, the majority code, and the holographic code. The comparative efficiency of codes in the presence of packet errors caused by the effect of fading due to multipath propagation in radio channels is considered. It is shown that holographic coding provides the correction of packet errors regardless of the length of the packet and its location in the codeword. The holographic code is of interest for transmitting information over channels with a low signal-to-noise ratio (space communications and optical communication systems using free space as a transmission channel, terrestrial, including mobile radio communications), as well as for storing information in systems exposed to ionizing radiation.
APA, Harvard, Vancouver, ISO, and other styles
28

Paler, Alexandru, Austin G. Fowler, and Robert Wille. "Online scheduled execution of quantum circuits protected by surface codes." Quantum Information and Computation 17, no. 15&16 (December 2017): 1335–48. http://dx.doi.org/10.26421/qic17.15-16-5.

Full text
Abstract:
Quantum circuits are the preferred formalism for expressing quantum information processing tasks. Quantum circuit design automation methods mostly use a waterfall approach and consider that high level circuit descriptions are hardware agnostic. This assumption has led to a static circuit perspective: the number of quantum bits and quantum gates is determined before circuit execution and everything is considered reliable with zero probability of failure. Many different schemes for achieving reliable fault-tolerant quantum computation exist, with different schemes suitable for different architectures. A number of large experimental groups are developing architectures well suited to being protected by surface quantum error correcting codes. Such circuits could include unreliable logical elements, such as state distillation, whose failure can be determined only after their actual execution. Therefore, practical logical circuits, as envisaged by many groups, are likely to have a dynamic structure. This requires an online scheduling of their execution: one knows for sure what needs to be executed only after previous elements have finished executing. This work shows that scheduling shares similarities with place and route methods, introduces the first online schedulers of quantum circuits protected by surface codes, and highlights scheduling efficiency by comparing the new methods with state of the art static scheduling of surface code protected fault-tolerant circuits.
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, Ming-Che, Jia-Wei Chang, Tzone I. Wang, and Zi Feng Huang. "Using Variation Theory as a Guiding Principle in an OOP Assisted Syntax Correction Learning System." International Journal of Emerging Technologies in Learning (iJET) 15, no. 14 (July 31, 2020): 35. http://dx.doi.org/10.3991/ijet.v15i14.14191.

Full text
Abstract:
Object-oriented programming skill is important for software professionals, and the subject has become a mandatory course in information science and computer engineering departments of universities. However, it is hard for novice learners to understand the syntax and semantics of the language while learning object-oriented programming, and that makes them feel frustrated. The purpose of this study is to build an object-oriented programming assistant system that gives syntax error feedback based on the variation theory. We established the syntax correction module on the basis of the Virtual Teaching Assistant (VTA). While compiling code, the system displays syntax errors, if any, with feedback designed according to the variation theory at different levels (the generation, contrast, separation, and fusion levels) to help learners correct the errors. The experiment design of this study splits the participants, who are university freshmen, into two groups by the S-type method based on the result of a mid-term test. Learning performances and questionnaires were used for surveying, followed by in-depth interviews, to evaluate the feasibility of the proposed assistant system. The findings indicate that the learners in the experimental group achieved better learning outcomes than their counterparts in the control group. This also shows that the strategy of using the variation theory in implementing feedback for object-oriented programming is effective.
APA, Harvard, Vancouver, ISO, and other styles
30

Haeupler, Bernhard, and Amirbehshad Shahrasbi. "Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound." Journal of the ACM 68, no. 5 (October 31, 2021): 1–39. http://dx.doi.org/10.1145/3468265.

Full text
Abstract:
We introduce synchronization strings, which provide a novel way to efficiently deal with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to cope with than more commonly considered Hamming-type errors, i.e., symbol substitutions and erasures. For every ε > 0, synchronization strings allow us to index a sequence with an ε^(-O(1))-size alphabet, such that one can efficiently transform k synchronization errors into (1 + ε)k Hamming-type errors. This powerful new technique has many applications. In this article, we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both Hamming-type errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey [30]: “Channels with synchronization errors...are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors...our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regimes, these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization strings-based indexing method gives a simple black-box construction that transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size.
This instantly transfers much of the highly developed understanding for regular ECCs into the realm of insdel codes. Most notably, for the complete noise spectrum, we obtain efficient “near-MDS” insdel codes, which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and ε > 0, we give a family of insdel codes achieving a rate of 1 - δ - ε over a constant-size alphabet that efficiently corrects a δ fraction of insertions or deletions.
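The gap between Hamming-type and synchronization errors described in this abstract is easy to see numerically: a single inserted symbol misaligns every later position, so one synchronization error can look like many substitutions. A standard dynamic-programming edit distance (not code from the paper) makes the comparison concrete:

```python
# Why insertions/deletions are harsher than substitutions: one inserted
# bit shifts every later symbol. Standard Levenshtein DP for contrast.

def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    """Levenshtein distance counting insertions, deletions, substitutions."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                        # deletion
                          d[i][j - 1] + 1,                        # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

sent = "01100110"
recv = "10110011"          # a '1' inserted at the front, last bit dropped
print(edit_distance(sent, recv))     # only 2 synchronization errors...
print(hamming_distance(sent, recv))  # ...but 5 of 8 positions disagree
```

Synchronization strings are, informally, the indexing machinery that lets a decoder re-align such shifted sequences so that ordinary Hamming-type decoding can take over.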
APA, Harvard, Vancouver, ISO, and other styles
31

Garzon, Max H., and Kiran C. Bobba. "Geometric Approaches to Gibbs Energy Landscapes and DNA Oligonucleotide Design." International Journal of Nanotechnology and Molecular Computation 3, no. 3 (July 2011): 42–56. http://dx.doi.org/10.4018/ijnmc.2011070104.

Full text
Abstract:
DNA codeword design has been a fundamental problem since the early days of DNA computing. The problem calls for finding large sets of single DNA strands that do not crosshybridize to themselves, to each other or to others' complements. Such strands represent so-called domains, particularly in the language of chemical reaction networks (CRNs). The problem has been shown to be of interest in other areas as well, including DNA memories and phylogenetic analyses because of their error correction and prevention properties. In prior work, a theoretical framework to analyze this problem has been developed and natural and simple versions of Codeword Design have been shown to be NP-complete using any single reasonable metric that approximates the Gibbs energy, thus practically making it very difficult to find any general procedure for finding such maximal sets exactly and efficiently. In this framework, codeword design is partially reduced to finding large sets of strands maximally separated in DNA spaces and, therefore, the size of such sets depends on the geometry of these spaces. Here, the authors describe in detail a new general technique to embed them in Euclidean spaces in such a way that oligonucleotides with high (low, respectively) hybridization affinity are mapped to neighboring (remote, respectively) points in a geometric lattice. This embedding materializes long-held metaphors about codeword design in analogies with error-correcting code design in information theory in terms of sphere packing and leads to designs that are in some cases known to be provably nearly optimal for small oligonucleotide sizes, whenever the corresponding spherical codes in Euclidean spaces are known to be so. It also leads to upper and lower bounds on estimates of the size of optimal codes of length under 20-mers, as well as to a few infinite families of DNA strand lengths, based on estimates of the kissing (or contact) number for sphere codes in high-dimensional Euclidean spaces.
Conversely, the authors show how solutions to DNA codeword design obtained by experimental or other means can also provide solutions to difficult spherical packing geometric problems via these approaches. Finally, the reduction suggests a tool to provide some insight into the approximate structure of the Gibbs energy landscapes, which play a primary role in the design and implementation of biomolecular programs.
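The separation criterion at the heart of codeword design can be sketched with a greedy search that keeps a strand only if it is far from every strand kept so far. Plain Hamming distance stands in below for the Gibbs-energy approximations the authors actually work with, and the strand length and distance parameter are illustrative only:

```python
# Greedy construction of a DNA codeword set: keep a strand only if it
# is at Hamming distance >= d from every strand already kept. A toy
# proxy for the separation criterion (real designs approximate Gibbs
# energy, not plain Hamming distance).
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_code(length, d, alphabet="ACGT"):
    code = []
    for word in product(alphabet, repeat=length):  # lexicographic scan
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return ["".join(w) for w in code]

# With length 3 and minimum distance 3, the greedy scan recovers the
# quaternary repetition code.
print(greedy_code(length=3, d=3))  # ['AAA', 'CCC', 'GGG', 'TTT']
```

This is exactly the "lexicode" idea from the Conway-Sloane entry at the top of this listing, transplanted to the DNA alphabet; the geometric embeddings in the paper are a way to reason about how large such maximally separated sets can be.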
APA, Harvard, Vancouver, ISO, and other styles
32

Newman, Michael, and Yaoyun Shi. "Limitations on transversal computation through quantum homomorphic encryption." Quantum Information and Computation 18, no. 11&12 (September 2018): 927–48. http://dx.doi.org/10.26421/qic18.11-12-3.

Full text
Abstract:
Transversality is a simple and effective method for implementing quantum computation fault-tolerantly. However, no quantum error-correcting code (QECC) can transversally implement a quantum universal gate set (Eastin and Knill, Phys. Rev. Lett. 102, 110502). Since reversible classical computation is often a dominating part of useful quantum computation, whether or not it can be implemented transversally is an important open problem. We show that, other than a small set of non-additive codes that we cannot rule out, no binary QECC can transversally implement a classical reversible universal gate set. In particular, no such QECC can implement the Toffoli gate transversally. We prove our result by constructing an information theoretically secure (but inefficient) quantum homomorphic encryption (ITS-QHE) scheme inspired by Ouyang et al. (arXiv:1508.00938). Homomorphic encryption allows the implementation of certain functions directly on encrypted data, i.e. homomorphically. Our scheme builds on almost any QECC, and implements that code's transversal gate set homomorphically. We observe a restriction imposed by Nayak's bound (FOCS 1999) on ITS-QHE, implying that any ITS quantum fully homomorphic scheme (ITS-QFHE) implementing the full set of classical reversible functions must be highly inefficient. While our scheme incurs exponential overhead, any such QECC implementing Toffoli transversally would still violate this lower bound through our scheme.
APA, Harvard, Vancouver, ISO, and other styles
33

Dahlberg, Axel, and Stephanie Wehner. "Transforming graph states using single-qubit operations." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2123 (May 28, 2018): 20170325. http://dx.doi.org/10.1098/rsta.2017.0325.

Full text
Abstract:
Stabilizer states form an important class of states in quantum information, and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example to find transformations between quantum error correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory, which is to decide whether some graph G′ is a vertex-minor of another graph G. The vertex-minor problem is, in general, NP-Complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is the rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus intuitively is a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G′ are bounded. Our algorithm is based on techniques by Courcelle for solving fixed parameter tractable problems, where here the relevant fixed parameter is the rank-width. The second half of this paper serves as an accessible but far from exhausting introduction to these concepts, which could be useful for many other problems in quantum information.
This article is part of a discussion meeting issue ‘Foundations of quantum mechanics and their impact on contemporary society’.
APA, Harvard, Vancouver, ISO, and other styles
34

Alfarano, Gianira N., Karan Khathuria, and Violetta Weger. "A survey on single server private information retrieval in a coding theory perspective." Applicable Algebra in Engineering, Communication and Computing, April 12, 2021. http://dx.doi.org/10.1007/s00200-021-00508-5.

Full text
Abstract:
In this paper, we present a new perspective on single server private information retrieval (PIR) schemes by using the notion of linear error-correcting codes. Many of the known single server schemes are based on taking linear combinations between database elements and the query elements. Using the theory of linear codes, we develop a generic framework that formalizes all such PIR schemes. This generic framework provides an appropriate setup to analyze the security of such PIR schemes. In fact, we describe some known PIR schemes with respect to this code-based framework, and present the weaknesses of the broken PIR schemes in a unified point of view.
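The "linear combinations between database elements and the query elements" that the framework formalizes amount to an inner product over a finite field. The sketch below uses an unmasked standard-basis query, so it is deliberately not private (a real scheme disguises the query vector, e.g., with homomorphic masking); it only shows the arithmetic the code-based view captures, with an arbitrarily chosen field size and toy data:

```python
# Minimal sketch of the linear-combination view of single server PIR:
# the server's answer is one inner product of a query vector with the
# database over GF(p). With a plain basis query, as here, the index
# leaks; real schemes hide the query, this shows only the arithmetic.

p = 257  # small prime field, chosen arbitrarily for the toy example

def server_answer(database, query):
    """Server computes one linear combination over GF(p)."""
    return sum(d * q for d, q in zip(database, query)) % p

def make_query(n, index):
    """Standard basis vector e_index, selecting one database entry."""
    return [1 if i == index else 0 for i in range(n)]

db = [17, 42, 99, 5]
q = make_query(len(db), 2)
print(server_answer(db, q))  # recovers db[2] = 99
```

In the paper's framing, privacy properties of a scheme correspond to properties of the linear code generated by the possible query vectors, which is what makes a unified security analysis possible.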
APA, Harvard, Vancouver, ISO, and other styles
35

Clark, David, and Lindsay Czap. "Minimizing the Cost of Guessing Games." American Journal of Undergraduate Research 15, no. 2 (September 26, 2018). http://dx.doi.org/10.33697/ajur.2018.015.

Full text
Abstract:
A two-player “guessing game” is a game in which the first participant, the “Responder,” picks a number from a certain range. Then, the second participant, the “Questioner,” asks only yes-or-no questions in order to guess the number. In this paper, we study guessing games with lies and costs. In particular, the Responder is allowed to lie in one answer, and the Questioner is charged a cost based on the content of each question. Guessing games with lies are closely linked to error correcting codes, which are mathematical objects that allow us to detect an error in received information and correct these errors. We will give basic definitions in coding theory and show how error correcting codes allow us to still guess the correct number even if one lie is involved. We will additionally seek to minimize the total cost of our games. We will provide explicit constructions, for any cost function, for games with the minimum possible cost and an unlimited number of questions. We also find minimum cost games for games with a restricted number of questions and a constant cost function. KEYWORDS: Ulam’s Game; Guessing Games With Lies; Error Correcting Codes; Pairwise Balanced Designs; Steiner Triple Systems
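The link to error-correcting codes can be made concrete: encode the Responder's number with a minimum-distance-3 code and treat each codeword bit as the answer to one yes-or-no question; one lie is then a single bit error, which nearest-codeword decoding removes. The sketch below uses the standard Hamming(7,4) code for numbers 0-15 (the game framing is ours, not a construction from the paper, and question costs are ignored):

```python
# Guessing game with one lie via the Hamming(7,4) code: ask "is bit j
# of the encoding of your number equal to 1?" for j = 0..6. One lied
# answer is a single bit flip, corrected by nearest-codeword decoding.

G = [  # generator matrix rows (4 data bits -> 7 answer bits)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(number):
    bits = [(number >> i) & 1 for i in range(4)]
    return tuple(sum(b * g for b, g in zip(bits, col)) % 2
                 for col in zip(*G))

CODEBOOK = {encode(n): n for n in range(16)}

def decode(answers):
    """Nearest codeword in Hamming distance tolerates one lie."""
    return min(CODEBOOK.items(),
               key=lambda cw: sum(a != b for a, b in zip(answers, cw[0])))[1]

answers = list(encode(11))
answers[3] ^= 1                 # the Responder lies on question 4
print(decode(tuple(answers)))   # still recovers 11
```

Minimizing cost, as the paper does, then becomes a question of which distance-3 question sets are cheapest under the given cost function, rather than whether correct guessing is possible at all.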
APA, Harvard, Vancouver, ISO, and other styles
36

Bae, Jung Hyun, Ahmed Abotabl, Hsien-Ping Lin, Kee-Bong Song, and Jungwon Lee. "An overview of channel coding for 5G NR cellular communications." APSIPA Transactions on Signal and Information Processing 8 (2019). http://dx.doi.org/10.1017/atsip.2019.10.

Full text
Abstract:
A 5G new radio cellular system is characterized by three main usage scenarios of enhanced mobile broadband (eMBB), ultra-reliable and low latency communications (URLLC), and massive machine type communications, which require improved throughput, latency, and reliability compared with a 4G system. This overview paper discusses key characteristics of 5G channel coding schemes which are mainly designed for the eMBB scenario as well as for partial support of the URLLC scenario focusing on low latency. Two capacity-achieving channel coding schemes, low-density parity-check (LDPC) codes and polar codes, have been adopted for 5G, where the former is for user data and the latter is for control information. As a coding scheme for data, 5G LDPC codes are designed to support high throughput, a variable code rate and length, and hybrid automatic repeat request, in addition to good error correcting capability. 5G polar codes, as a coding scheme for control, are designed to perform well with short block lengths while addressing the latency issue of successive cancellation decoding.
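The polar encoder adopted for 5G control channels is built by applying the 2x2 Arikan kernel [[1,0],[1,1]] recursively over GF(2). A minimal transform is sketched below; frozen-set selection, CRC concatenation, and the rate matching defined in the standard are all omitted, and the frozen positions in the example are only illustrative:

```python
# Arikan polar transform: multiply the input vector by the n-fold
# Kronecker power of [[1,0],[1,1]] over GF(2), via butterfly stages.

def polar_transform(u):
    """Input length must be a power of two."""
    x = list(u)
    n, s = len(x), 1
    while s < n:
        for i in range(0, n, 2 * s):
            for j in range(i, i + s):
                x[j] ^= x[j + s]   # butterfly: upper branch absorbs lower
        s *= 2
    return x

# Toy (4,2) code: freeze two inputs to 0 and carry data in the rest
# (this frozen-set choice is illustrative, not the 5G reliability order).
data = [1, 0]
u = [0, 0, data[0], data[1]]
print(polar_transform(u))
```

Successive cancellation decoding, whose latency the 5G design works around, walks back through these same butterfly stages bit by bit.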
APA, Harvard, Vancouver, ISO, and other styles
37

Rozpędek, Filip, Kyungjoo Noh, Qian Xu, Saikat Guha, and Liang Jiang. "Quantum repeaters based on concatenated bosonic and discrete-variable quantum codes." npj Quantum Information 7, no. 1 (June 22, 2021). http://dx.doi.org/10.1038/s41534-021-00438-7.

Full text
Abstract:
We propose an architecture of quantum-error-correction-based quantum repeaters that combines techniques used in discrete- and continuous-variable quantum information. Specifically, we propose to encode the transmitted qubits in a concatenated code consisting of two levels. On the first level we use a continuous-variable GKP code encoding the qubit in a single bosonic mode. On the second level we use a small discrete-variable code. Such an architecture has two important features. Firstly, errors on each of the two levels are corrected in repeaters of two different types. This enables achieving the performance needed in practical scenarios at a reduced cost with respect to an architecture in which all repeaters are the same. Secondly, the use of the continuous-variable GKP code on the lower level generates additional analog information which enhances the error-correcting capabilities of the second-level code, such that long-distance communication becomes possible with encodings consisting of only four or seven optical modes.
APA, Harvard, Vancouver, ISO, and other styles
38

"Design of Approximate Polar Maximum-Likelihood Decoder." International Journal of Engineering and Advanced Technology 9, no. 2 (December 30, 2019): 5086–92. http://dx.doi.org/10.35940/ijeat.b3336.129219.

Full text
Abstract:
Polar codes, introduced by Arikan, achieve nearly error-free communication for any given symmetric noisy channel with low encoding and decoding complexities on a large set of fundamental channels. Polar codes have recently become an ideal error-correcting code from the perspective of information theory owing to their channel-capacity-achieving property. Though the successive cancellation decoder with approximate computing is efficient, the proposed ML-based decoder is more efficient still, as it is equipped with a Modified Processing Element that performs better by exploiting the properties of a median filter. The proposed ML-based decoder reduces the area, power consumption, and logic utilization. In the present paper, an effective polar decoder architecture is designed and implemented on an FPGA using Virtex-5. We examine the proposed construction, which is suitable for decoding long polar codes with low hardware complexity.
APA, Harvard, Vancouver, ISO, and other styles
39

Jeong, Jaeho, Seong-Joon Park, Jae-Won Kim, Jong-Seon No, Ha Hyeon Jeon, Jeong Wook Lee, Albert No, Sunghwan Kim, and Hosung Park. "Cooperative sequence clustering and decoding for DNA storage system with fountain codes." Bioinformatics, April 27, 2021. http://dx.doi.org/10.1093/bioinformatics/btab246.

Full text
Abstract:
Motivation: In DNA storage systems, there are tradeoffs between writing and reading costs. Increasing the code rate of error-correcting codes may save writing cost, but it will need more sequence reads for data retrieval. There is potentially a way to improve the sequencing and decoding processes such that the reading cost induced by this tradeoff is reduced without increasing the writing cost. In past research, the clustering, alignment and decoding processes were considered as separate stages, but we believe that using the information from all these processes together may improve decoding performance. Actual experiments of DNA synthesis and sequencing should be performed because simulations cannot be relied on to cover all error possibilities in practical circumstances. Results: For DNA storage systems using fountain codes and Reed-Solomon (RS) codes, we introduce several techniques to improve the decoding performance. We designed the decoding process focusing on the cooperation of key components: Hamming-distance based clustering, discarding of abnormal sequence reads, RS error correction as well as detection, and quality score-based ordering of sequences. We synthesized 513.6 KB of data into DNA oligo pools and sequenced this data successfully with an Illumina MiSeq instrument. Compared to Erlich's research, the proposed decoding method additionally incorporates sequence reads with minor errors which had been discarded before, and thus was able to make use of 10.6–11.9% more sequence reads from the same sequencing environment; this resulted in a 6.5–8.9% reduction in the reading cost. Channel characteristics including sequence coverage and read-length distributions are provided as well. Availability and implementation: The raw data files and the source codes of our experiments are available at: https://github.com/jhjeong0702/dna-storage.
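One simplified way to picture the Hamming-distance-based clustering and the discarding of abnormal reads is the sketch below. In the paper the cluster centers are not known in advance and the pipeline additionally uses RS correction and quality scores; here the centers are given and the threshold and sequences are invented, purely for illustration:

```python
# Toy Hamming-distance clustering of sequence reads: assign each read
# to its nearest reference strand, and discard reads far from every
# reference (likely heavily corrupted). Data/thresholds are made up.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cluster_reads(reads, references, max_dist=2):
    clusters = {ref: [] for ref in references}
    discarded = []
    for read in reads:
        best = min(references, key=lambda r: hamming(read, r))
        if hamming(read, best) <= max_dist:
            clusters[best].append(read)
        else:
            discarded.append(read)   # abnormal read, dropped before decoding
    return clusters, discarded

refs = ["ACGTACGT", "TTGGCCAA"]
reads = ["ACGTACGA", "TTGACCAA", "GGGGGGGG"]
clusters, dropped = cluster_reads(reads, refs)
print(len(clusters["ACGTACGT"]), len(dropped))  # 1 clustered, 1 dropped
```

The paper's point is that this clustering stage should cooperate with, rather than precede, the RS decoding: reads with minor errors that a naive threshold would drop can still contribute votes once error correction is in the loop.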
APA, Harvard, Vancouver, ISO, and other styles
40

Ballard, Su. "Information, Noise and et al." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2704.

Full text
Abstract:
The two companions scurry off when they hear a noise at the door. It was only a noise, but it was also a message, a bit of information producing panic: an interruption, a corruption, a rupture of communication. Was the noise really a message? Wasn’t it, rather, static, a parasite? Michel Serres, 1982. Since, ordinarily, channels have a certain amount of noise, and therefore a finite capacity, exact transmission is impossible. Claude Shannon, 1948. Reading Information At their most simplistic, there are two means for shifting information around – analogue and digital. Analogue movement depends on analogy to perform computations; it is continuous and the relationships between numbers are keyed as a continuous ordinal set. The digital set is discrete; moving one finger at a time results in a one-to-one correspondence. Nevertheless, analogue and digital are like the two companions in Serres’ tale. Each suffers the relationship of noise to information as internal rupture and external interference. In their examination of historical constructions of information, Hobart and Schiffman locate the noise of the analogue within its physical materials; they write, “All analogue machines harbour a certain amount of vagueness, known technically as ‘noise’, which describes the disturbing influences of the machine’s physical materials on its calculations” (208). These “certain amounts of vagueness” are essential to Claude Shannon’s articulation of a theory for information transfer that forms the basis for this paper. In transforming the structures and materials through which it travels, information has left its traces in digital art installation. These traces are located in installation’s systems, structures and materials.
The usefulness of information theory as a tool to understand these relationships has until recently been overlooked by a tradition of media art history that has grouped artworks according to the properties of the artwork and/or tied them into the histories of representation and perception in art theory. Throughout this essay I use the productive dual positioning of noise and information to address the errors and impurity inherent within the viewing experiences of digital installation. Information and Noise It is not hard to see why the fractured spaces of digital installation are haunted by histories of information science. In his 1948 essay “The Mathematical Theory of Communication” Claude Shannon developed a new model for communications technologies that articulated informational feedback processes. Discussions of information transmission through phone lines were occurring alongside the development of technology capable of computing multiple discrete and variable packets of information: that is, the digital computer. And, like art, information science remains concerned with the material spaces of transmission – whether conceptual, social or critical. In the context of art something is made to be seen, understood, viewed, or presented as a series of relationships that might be established between individuals, groups, environments, and sensations. Understood this way art is an aesthetic relationship between differing material bodies, images, representations, and spaces. It is an event. Shannon was adamant that information must not be confused with meaning. To increase efficiency he insisted that the message be separated from its components; in particular, those aspects that were predictable were not to be considered information (Hansen 79). The problem that Shannon had to contend with was noise. Unwanted and disruptive, noise became symbolic of the struggle to control the growth of systems. The more complex the system, the more noise needed to be addressed. 
Noise is both the material from which information is constructed, as well as being the matter which information resists. Weaver (Shannon’s first commentator) writes: In the process of being transmitted, it is unfortunately characteristic that certain things are added to the signal which were not intended by the information source. These unwanted additions may be distortions of sound (in telephony, for example) or static (in radio), or distortions in shape or shading of picture (television), or errors in transmission (telegraphy or facsimile), etc. All of these changes in the transmitted signal are called noise. (4). To enable more efficient message transmission, Shannon designed systems that repressed as much noise as possible, while also acknowledging that without some noise information could not be transmitted. Shannon’s conception of information meant that information would not change if the context changed. This was crucial if a general theory of information transmission was to be plausible and meant that a methodology for noise management could be foregrounded (Pask 123). Without meaning, information became a quantity, a yes or no decision, that Shannon called a “bit” (1). Shannon’s emphasis on separating signal or message from both predictability and external noise appeared to give information an identity where it could float free of a material substance and be treated independently of context. However, for this to occur information would have to become fixed and understood as an entity. Shannon went to pains to demonstrate that the separation of meaning and information was actually to enable the reverse. A fluidity of information and the possibilities for encoding it would mean that information, although measurable, did not have a finite form. Tied into the paradox of this equation is the crucial role of noise or error. In Shannon’s communication model information is not only complicit with noise; it is totally dependent upon it for understanding.
Without noise, either encoded within the original message or present from sources outside the channel, information cannot get through. The model of sender-encoder-channel-signal (message)-decoder-receiver that Shannon constructed has an arrow inserting noise. Visually and schematically this noise is a disruption pointing up and inserting itself in the nice clean lines of the message. This does not mean that noise was a last-minute consideration; rather, noise was the very thing Shannon was working with (and against). It is present in every image we have of information. A source, message, transmitter, receiver and their attendant noises are all material infrastructures that serve to contextualise the information they transmit, receive, and disrupt.
Figure 1. Claude Shannon, “A Mathematical Theory of Communication,” 1948.
In his analytical discussion of the diagram, Shannon actually locates noise in two crucial places. The first position accorded noise is external, marked by the arrow that demonstrates how noise is introduced to the message channel whilst in transit. External noise confuses the purity of the message whilst equivocally adding new information. External noise has a particular materiality and enters the equation as unexplained variation and random error. This is disruptive presence rather than entropic coded pattern. Shannon offers this equivocal definition of noise as everything that is outside the linear model of sender-channel-receiver; hence, anything can be noise if it enters a channel where it is unwelcome. Secondly, noise was defined as unpredictability or entropy found and encoded within the message itself. This for Shannon was an essential and, in some ways, positive role. Entropic forces invited continual reorganisation and (when engaging the laws of redundancy) assisted with the removal of repetition, enabling faster message transmission (Shannon 48). Weaver calls this shifting relationship between entropy and message “equivocation” (11).
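Shannon's two quantities here – the entropy coded within the message and the redundancy that can be stripped out to speed transmission – are measurable. As an illustrative aside (the sample string and names below are my own, not drawn from the essay or from et al.'s work), the following sketch computes the entropy of a short text with Shannon's formula H = -Σ p log₂ p, and its redundancy relative to the maximum entropy attainable if every symbol were equally likely:

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy H = -sum(p * log2 p) over symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

message = "in the race for authority we find interlopers"
h = entropy_bits(message)

# Redundancy: how far the message falls short of the maximum entropy
# log2(alphabet size), reached only if every symbol were equally likely.
h_max = math.log2(len(set(message)))
redundancy = 1 - h / h_max
print(f"entropy: {h:.2f} bits/symbol, redundancy: {redundancy:.1%}")
```

A perfectly repetitive message ("aaaa") scores zero entropy – pure redundancy, no information – which is precisely Shannon's point that the predictable parts of a message are not information at all.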
Weaver identified equivocation as central to the manner in which noise and information operated. A process of equivocation identified the receiver’s knowledge. For Shannon, a process of equivocation mediated between useful information and noise, as both were “measured in the same units” (Hayles, Chaos 55). To eliminate noise completely is to sacrifice information. Information understood in this way is also about relationships between differing material bodies, representations, and spaces, connected together for the purposes of transmission. It, like the artwork, is an event. This would appear to suggest a correlation between information transmission and viewing in galleries. Far from it. Although the contemporary information channel is essentially a tube with fixed walls (it is still constrained by physical properties, bandwidth, and so on), and despite the implicit spatialisation of information models, I am not proposing a direct correlation between information channels and installation spaces. This is because I am not interested in ‘reading’ the information of either environment. What I am suggesting is that both environments share this material of noise. Noise is present in four places. Firstly, noise is within the media errors of transmission, and secondly, it is within the media of the installation (neither of which are one-way flows). Thirdly, the viewer or listener introduces noise as interference, and lastly, it is present in the very materials through which it travels. Noise layered on noise.
Redundancy and Modulation
So far in this paper I have discussed the relationship of information to noise. For the remainder, I want to address some particular processes or manifestations of noise in New Zealand artists’ collective et al.’s maintenance of social solidarity–instance 5 (2006, exhibited as part of the SCAPE Biennial of Art in Public Space, Christchurch Art Gallery).
The installation occupies a small alcove that is partially blocked by a military-style portable table stacked with newspapers. Inside the space are three grey wooden chairs, some headphones, and a modified data projection of Google Earth. It is not immediately clear if the viewer is allowed within the spaces of the alcove to listen to the headphones, as monotonous voices fill the whole space intoning political, social, and religious platitudes. The headphones might be a tool to block out the noise. In the installation it is as if multiple messages have been sent but their source, channel, and transmitter are unintelligible to the receiver. All that is left is information divorced from meaning. As other works by et al. have demonstrated, social solidarity is not a fundamentalism with directed positions and singular leaders. For example, in rapture (2004) noise disrupts all presence as a portable shed quivers in response to underground nuclear explosions 40,000km away. In the fundamental practice (2005) the viewer is left attempting to decode the un-encoded, as again sound and large steel barriers control and determine only certain movements (see http://www.etal.name/ for some documentation of these projects). maintenance of social solidarity–instance 5 is a development of the fundamental practice. To enter its spaces viewers slip around the table and find themselves extremely close to the projection screen. Despite the provision of copious media the viewer cannot control any aspect of the environment. On screen, and apparently integral to the Google Earth imagery, are five animated and imposing dark grey monolith forms. Because of their connection to the monotonous voices in the headphones, the monoliths seem to map the imposition of narrative, power, and force in various disputed territories. Like their sudden arrival in Kubrick’s 2001: A Space Odyssey (1968), it is the contradiction of the visibility and improbability of the monoliths that renders them believable.
On the video landscape the five monoliths apparently house the dispassionate voices of many different media and political authorities. Their presence is both redundant and essential as they modulate the layering of media forces – and in between, error slips in. In a broad discussion of information, Gilles Deleuze and Felix Guattari highlight the necessary role of redundancy, commenting that “redundancy has two forms, frequency and resonance; the first concerns the significance of information, the second (I=I) concerns the subjectivity of communication. It becomes apparent that information and communication, and even significance and subjectification, are subordinate to redundancy” (79). In maintenance of social solidarity–instance 5 patterns of frequency highlight the necessary role of entropy where it is coded into gaps in the vocal transmission. Frequency is a structuring of information tied to meaningful communication. Resonance, like the stack of un-decodable newspapers on the portable table, is the carrier of redundancy. It is in the gaps between the recorded voices that connections between the monoliths and the texts are made, and these two forms of redundancy emerge. As Shannon says, redundancy is a problem of language. This is because redundancy and modulation do not equate with the relationship of signal to noise. Signal to noise is a representational relationship; frequency and resonance are not representational but relational. This means that an image that might be “real-time” interrupts our understanding that the real comes first, with representation always trailing second (Virilio 65). In maintenance of social solidarity–instance 5 the monoliths occupy a fixed spatial ground, imposed over the shifting navigation of Google Earth (this is not to mistake Google Earth with the ‘real’ earth). Together they form a visual counterpoint to the texts reciting in the viewer’s ears, which themselves might present as real but again, they aren’t.
As Shannon contended, information cannot be tied to meaning. Instead, in the race for authority and thus authenticity we find interlopers, noisy digital images that suggest the presence of real-time perception. The spaces of maintenance of social solidarity–instance 5 meld representation and information together through the materiality of noise. And across all the different modalities employed, the appearance of noise is not through formation, but through error, accident, or surprise. This is the last step in a movement away from the mimetic obedience of information and its adherence to meaning-making or representational systems. In maintenance of social solidarity–instance 5 we are forced to align real time with virtual spaces and suspend our disbelief in the temporal truths that we see on the screen before us. This brief introduction to the work has returned us to the relationship between analogue and digital materials. Signal to noise is an analogue relationship of presence and absence. No signal equals a break in transmission. On the other hand, a digital system, due to its basis in discrete bits, transmits through probability (that is, the transmission occurs through pattern and randomness, rather than presence and absence) (Hayles, How We Became 25). In his use of Shannon’s theory for the study of information transmission, Schwartz comments that the shift in information theory from analogue to digital is a shift from an analogue relationship of signal to noise to one of the probability of error (318). As I have argued in this paper, if it is measured as a quantity, noise is productive; it adds information. In both digital and analogue systems it is predictability and repetition that do not contribute information.
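Schwartz's point – that the digital reframes noise as a probability of error – can be made concrete. The sketch below is an illustrative aside under assumptions of my own (a binary symmetric channel that flips each bit with probability p, and three-fold repetition as the simplest form of redundancy), not a model of the installation: it shows how redundancy, the very thing stripped out for efficient transmission, is what lets a receiver correct the channel's errors.

```python
import random

def channel(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def encode(bits):
    """Redundancy as repetition: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(bits):
    """Majority vote over each group of three received bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

rng = random.Random(1)
message = [rng.randint(0, 1) for _ in range(1000)]

# Without redundancy, roughly a fraction p of the bits arrive corrupted.
raw = channel(message, 0.1, rng)
raw_errors = sum(a != b for a, b in zip(message, raw))

# With repetition coding, a bit survives unless two of its three
# copies are flipped: roughly 3*p**2 errors instead of p.
coded = decode(channel(encode(message), 0.1, rng))
coded_errors = sum(a != b for a, b in zip(message, coded))

print(raw_errors, coded_errors)  # coded_errors should be far smaller
```

With p = 0.1, about one bit in ten is corrupted on the raw channel, while the repetition code fails only around three times in a hundred – a quantity of noise traded against a quantity of redundancy, exactly as Shannon's model prescribes.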
Von Neumann makes the distinction clear, saying that to some extent the “precision” of the digital machine “is absolute.” Even so, error “as a matter of normal operation and not solely … as an accident attributable to some definite breakdown” nevertheless creeps in (294). Error creeps in. In maintenance of social solidarity–instance 5, et al. disrupts signal transmission by layering ambiguities into the installation. Gaps are left for viewers to introduce misreadings of scale, space, and apprehension. Rather than selecting meaning out of information within nontechnical contexts, a viewer finds herself in the same sphere as information. Noise imbricates both information and viewer within a larger open system. When asked about the relationship with the viewer in her work, et al. collaborator p.mule writes: “To answer the 1st question, communication is important, clarity of concept. To answer the 2nd question, we are all receivers of information, how we process is individual. To answer the 3rd question, the work is accessible if you receive the information.” But the question remains: how do we receive the information? In maintenance of social solidarity–instance 5 the system dominates. Despite the use of sound engineering and sophisticated Google Earth mapping technologies, the work appears to be constructed from discarded technologies, both analogue and digital. The ominous hovering monoliths suggest answers: that somewhere within this work are methodologies to confront the materialising forces of digital error. To don the headphones is to invite a position that operates as a filtering of power. The parameters for this power are in a constant state of flux. This means that whilst mapping these forces the work does not locate them. Sound is encountered and constructed.
Furthermore, the work does not oppose digital and analogue, for as von Neumann comments, “the real importance of the digital procedure lies in its ability to reduce the computational noise level to an extent which is completely unobtainable by any other (analogy) procedure” (295). maintenance of social solidarity–instance 5 shows how digital and analogue come together through the productive errors of modulation and redundancy. et al.’s research constantly turns to representational and meaning-making systems. As one instance, maintenance of social solidarity–instance 5 demonstrates how the digital has challenged the logics of the binary in the traditions of information theory. Digital logics are modulated by redundancies and accidents. In maintenance of social solidarity–instance 5 it is not possible to have information without noise. If, as I have argued here, digital installation operates between noise and information, then, in a constant disruption of the legacies of representation, immersion, and interaction, it is possible to open up material languages for the digital. Furthermore, an engagement with noise and error results in a blurring of the structures of information, generating a position from which we can discuss the viewer as immersed within the system – not as receiver or meaning-making actant, but as an essential material within the open system of the artwork.
References
Barr, Jim, and Mary Barr. “L. Budd et al.” Toi Toi Toi: Three Generations of Artists from New Zealand. Ed. Rene Block. Kassel: Museum Fridericianum, 1999. 123.
Burke, Gregory, and Natasha Conland, eds. et al. the fundamental practice. Wellington: Creative New Zealand, 2005.
Burke, Gregory, and Natasha Conland, eds. Venice Document. et al. the fundamental practice. Wellington: Creative New Zealand, 2006.
Daly-Peoples, John. Urban Myths and the et al. Legend. 21 Aug. 2004. The Big Idea (reprint) <http://www.thebigidea.co.nz/print.php?sid=2234>.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Trans. Brian Massumi. London: The Athlone Press, 1996.
Hansen, Mark. New Philosophy for New Media. Cambridge, MA and London: MIT Press, 2004.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago and London: U of Chicago P, 1999.
Hayles, N. Katherine. Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Ithaca and London: Cornell UP, 1990.
Hobart, Michael, and Zachary Schiffman. Information Ages: Literacy, Numeracy, and the Computer Revolution. Baltimore: Johns Hopkins UP, 1998.
p.mule, et al. 2007. 2 Jul. 2007 <http://www.etal.name/index.htm>.
Pask, Gordon. An Approach to Cybernetics. London: Hutchinson, 1961.
Paulson, William. The Noise of Culture: Literary Texts in a World of Information. Ithaca and London: Cornell UP, 1988.
Schwartz, Mischa. Information Transmission, Modulation, and Noise: A Unified Approach to Communication Systems. 3rd ed. New York: McGraw-Hill, 1980.
Serres, Michel. The Parasite. Trans. Lawrence R. Schehr. Baltimore: Johns Hopkins UP, 1982.
Shannon, Claude. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (July, October 1948): 379-423, 623-656 (reprinted with corrections). 13 Jul. 2004 <http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html>.
Virilio, Paul. The Vision Machine. Trans. Julie Rose. Bloomington and Indianapolis: Indiana UP, British Film Institute, 1994.
Von Neumann, John. “The General and Logical Theory of Automata.” Collected Works. Ed. A. H. Taub. Vol. 5. Oxford: Pergamon Press, 1963.
Weaver, Warren. “Recent Contributions to the Mathematical Theory of Communication.” The Mathematical Theory of Communication. Eds. Claude Shannon and Warren Weaver. 1963 paperback ed. Urbana and Chicago: U of Illinois P, 1949. 1-16.
Work Discussed
et al. maintenance of social solidarity–instance 5. 2006. Installation, Google Earth feed, newspapers, sound.
Exhibited in SCAPE 2006 Biennial of Art in Public Space, Christchurch Art Gallery, Christchurch, September 30-November 12. Images reproduced with the permission of et al. Photographs by Lee Cunliffe.
Acknowledgments
Research for this paper was conducted with the support of an Otago Polytechnic Research Grant. Photographs of et al. maintenance of social solidarity–instance 5 by Lee Cunliffe.
Citation reference for this article
MLA Style
Ballard, Su. "Information, Noise and et al." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/02-ballard.php>.
APA Style
Ballard, S. (Oct. 2007) "Information, Noise and et al.," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/02-ballard.php>.