Journal articles on the topic 'Error-correction codes (Information theory)'

Consult the top 50 journal articles for your research on the topic 'Error-correction codes (Information theory).'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Curto, Carina, Vladimir Itskov, Katherine Morrison, Zachary Roth, and Judy L. Walker. "Combinatorial Neural Codes from a Mathematical Coding Theory Perspective." Neural Computation 25, no. 7 (July 2013): 1891–925. http://dx.doi.org/10.1162/neco_a_00459.

Abstract:
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
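A quick way to make the decoding notion used here concrete: in the mathematical coding theory view, error correction means mapping a noisy binary response pattern to the nearest codeword in Hamming distance. The sketch below is purely illustrative (the toy codebook is invented, not taken from the paper):

```python
# Illustrative sketch: nearest-codeword decoding under Hamming distance,
# the basic error-correction operation evaluated for neural codes.

def hamming(u, v):
    """Number of positions in which two equal-length binary tuples differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(received, codebook):
    """Return the codeword closest to `received` in Hamming distance."""
    return min(codebook, key=lambda c: hamming(received, c))

# Toy codebook standing in for a combinatorial neural code.
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
noisy = (1, 0, 1, 0, 0)  # one bit flipped in the second codeword
print(nearest_codeword(noisy, codebook))  # -> (1, 1, 1, 0, 0)
```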
2

Raussendorf, Robert. "Key ideas in quantum error correction." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1975 (September 28, 2012): 4541–65. http://dx.doi.org/10.1098/rsta.2011.0494.

Abstract:
In this introductory article on the subject of quantum error correction and fault-tolerant quantum computation, we review three important ingredients that enter known constructions for fault-tolerant quantum computation, namely quantum codes, error discretization and transversal quantum gates. Taken together, they provide a ground on which the theory of quantum error correction can be developed and fault-tolerant quantum information protocols can be built.
3

Conway, J., and N. Sloane. "Lexicographic codes: Error-correcting codes from game theory." IEEE Transactions on Information Theory 32, no. 3 (May 1986): 337–48. http://dx.doi.org/10.1109/tit.1986.1057187.

4

Semerenko, Vasyl, and Oleksandr Voinalovich. "The simplification of computationals in error correction coding." Technology audit and production reserves 3, no. 2(59) (June 30, 2021): 24–28. http://dx.doi.org/10.15587/2706-5448.2021.233656.

Abstract:
The object of research is the processes of error correction transformation of information in automated systems. The research is aimed at reducing the complexity of decoding cyclic codes by combining modern mathematical models and practical tools. The main prerequisite for the complication of computations in deterministic linear error-correcting codes is the use of the algebraic representation as the main mathematical apparatus for these types of codes. Despite the universality of the algebraic approach, its main drawback is the impossibility of taking into account the characteristic features of all subclasses of linear codes. In particular, the cyclic property is not taken into account at all for cyclic codes. Taking this property into account, one can move to a fundamentally different mathematical representation of cyclic codes – the theory of linear automata in Galois fields (linear finite-state machines). For the automaton representation of cyclic codes, it is proved that the problem of syndromic decoding of these codes is, in the general case, NP-complete. However, using the proposed hierarchical approach to problems of complexity, a more accurate analysis of the growth of computational complexity can be carried out. Correction of single errors during one time interval (one iteration) of decoding has complexity that is linear in the codeword length, while error correction over m iterations of permutations of codeword bits has polynomial complexity. Accordingly, three subclasses of cyclic codes are distinguished by the complexity of their decoding: easily decoded (linear complexity), iteratively decoded (polynomial complexity), and hard to decode (exponential complexity). Practical ways to reduce the complexity of computations are considered: alternate use of probabilistic and deterministic linear codes, simplification of software and hardware implementation at the cost of increased decoding time, and the use of interleaving. An interleaving method is proposed that makes it possible to transform burst errors into single errors. The mathematical apparatus of linear automata allows the indicated problems of error correction coding to be solved jointly.
5

Magdalena de la Fuente, Julio Carlos, Nicolas Tarantino, and Jens Eisert. "Non-Pauli topological stabilizer codes from twisted quantum doubles." Quantum 5 (February 17, 2021): 398. http://dx.doi.org/10.22331/q-2021-02-17-398.

Abstract:
It has long been known that long-ranged entangled topological phases can be exploited to protect quantum information against unwanted local errors. Indeed, conditions for intrinsic topological order are reminiscent of criteria for faithful quantum error correction. At the same time, the promise of using general topological orders for practical error correction remains largely unfulfilled to date. In this work, we significantly contribute to establishing such a connection by showing that Abelian twisted quantum double models can be used for quantum error correction. By exploiting the group cohomological data sitting at the heart of these lattice models, we transmute the terms of these Hamiltonians into full-rank, pairwise commuting operators, defining commuting stabilizers. The resulting codes are defined by non-Pauli commuting stabilizers, with local systems that can either be qubits or higher dimensional quantum systems. Thus, this work establishes a new connection between condensed matter physics and quantum information theory, and constructs tools to systematically devise new topological quantum error correcting codes beyond toric or surface code models.
6

Arora, H. D., and Anjali Dhiman. "Comparative Study of Generalized Quantitative-Qualitative Inaccuracy Fuzzy Measures for Noiseless Coding Theorem and 1:1 Codes." International Journal of Mathematics and Mathematical Sciences 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/258675.

Abstract:
In coding theory, we study various properties of codes for application in data compression, cryptography, error correction, and network coding. The study of codes is pursued in information theory, electrical engineering, mathematics, and computer science for the transmission of data through reliable and efficient methods. We have to consider how coding of messages can be done efficiently so that the maximum number of messages can be sent over a noiseless channel in a given time. Thus, the minimum value of mean codeword length subject to a given constraint on codeword lengths has to be found. In this paper, we introduce the mean codeword length of order α and type β for 1:1 codes and analyze the relationship between average codeword length and fuzzy information measures for binary 1:1 codes. Further, a noiseless coding theorem associated with fuzzy information measures is established.
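For orientation, the classical (non-fuzzy) quantities that the paper's order-α, type-β measure generalizes are the textbook mean codeword length and its entropy bounds; note that 1:1 codes are merely non-singular and need not satisfy the Kraft inequality, which is what lets them undercut these bounds:

```latex
% Textbook definitions (background, not the paper's generalized measure):
% mean codeword length, and the Kraft inequality for uniquely decodable
% codes over a D-ary alphabet.
\[
  L = \sum_{i=1}^{n} p_i \,\ell_i, \qquad \sum_{i=1}^{n} D^{-\ell_i} \le 1.
\]
% Shannon's noiseless coding theorem then bounds the optimal length:
\[
  H_D(P) \;\le\; L^{*} \;<\; H_D(P) + 1.
\]
```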
7

Günlü, Onur, and Rafael Schaefer. "An Optimality Summary: Secret Key Agreement with Physical Unclonable Functions." Entropy 23, no. 1 (December 24, 2020): 16. http://dx.doi.org/10.3390/e23010016.

Abstract:
We address security and privacy problems for digital devices and biometrics from an information-theoretic optimality perspective to conduct authentication, message encryption/decryption, identification or secure and private computations by using a secret key. A physical unclonable function (PUF) provides local security to digital devices and this review gives the most relevant summary for information theorists, coding theorists, and signal processing community members who are interested in optimal PUF constructions. Low-complexity signal processing methods are applied to simplify information-theoretic analyses. The best trade-offs between the privacy-leakage, secret-key, and storage rates are discussed. Proposed optimal constructions that jointly design the vector quantizer and error-correction code parameters are listed. These constructions include modern and algebraic codes such as polar codes and convolutional codes, both of which can achieve small block-error probabilities at short block lengths, corresponding to a small number of PUF circuits. Open problems in the PUF literature from signal processing, information theory, coding theory, and hardware complexity perspectives and their combinations are listed to stimulate further advancements in the research on local privacy and security.
8

Weaver, Nik. "Quantum Graphs as Quantum Relations." Journal of Geometric Analysis 31, no. 9 (January 13, 2021): 9090–112. http://dx.doi.org/10.1007/s12220-020-00578-w.

Abstract:
The “noncommutative graphs” which arise in quantum error correction are a special case of the quantum relations introduced in Weaver (Quantum relations. Mem Am Math Soc 215(v–vi):81–140, 2012). We use this perspective to interpret the Knill–Laflamme error-correction conditions (Knill and Laflamme in Theory of quantum error-correcting codes. Phys Rev A 55:900-911, 1997) in terms of graph-theoretic independence, to give intrinsic characterizations of Stahlke’s noncommutative graph homomorphisms (Stahlke in Quantum zero-error source-channel coding and non-commutative graph theory. IEEE Trans Inf Theory 62:554–577, 2016) and Duan, Severini, and Winter’s noncommutative bipartite graphs (Duan et al., op. cit. in Zero-error communication via quantum channels, noncommutative graphs, and a quantum Lovász number. IEEE Trans Inf Theory 59:1164–1174, 2013), and to realize the noncommutative confusability graph associated to a quantum channel (Duan et al., op. cit. in Zero-error communication via quantum channels, noncommutative graphs, and a quantum Lovász number. IEEE Trans Inf Theory 59:1164–1174, 2013) as the pullback of a diagonal relation. Our framework includes as special cases not only purely classical and purely quantum information theory, but also the “mixed” setting which arises in quantum systems obeying superselection rules. Thus we are able to define noncommutative confusability graphs, give error correction conditions, and so on, for such systems. This could have practical value, as superselection constraints on information encoding can be physically realistic.
9

Huang, Pengfei, Yi Liu, Xiaojie Zhang, Paul H. Siegel, and Erich F. Haratsch. "Syndrome-Coupled Rate-Compatible Error-Correcting Codes: Theory and Application." IEEE Transactions on Information Theory 66, no. 4 (April 2020): 2311–30. http://dx.doi.org/10.1109/tit.2020.2966439.

10

Sah, Dhaneshwar. "Iterative Decoding of Turbo Codes." Journal of Advanced College of Engineering and Management 3 (January 10, 2018): 15. http://dx.doi.org/10.3126/jacem.v3i0.18810.

Abstract:
This paper presents a thesis consisting of a study of turbo codes as an error-control code and the software implementation of two different decoders, namely the maximum a posteriori (MAP) and soft-output Viterbi algorithm (SOVA) decoders. Turbo codes were introduced in 1993 by Berrou et al. [2] and are perhaps the most exciting and potentially important development in coding theory in recent years. They achieve near-Shannon-limit error correction performance with relatively simple component codes and large interleavers. They can be constructed by concatenating at least two component codes in a parallel fashion, separated by an interleaver. The convolutional codes can achieve very good results. In order for a concatenated scheme such as a turbo code to work properly, the decoding algorithm must effect an exchange of soft information between component decoders. The concept behind turbo decoding is to pass soft information from the output of one decoder to the input of the succeeding one, and to iterate this process several times to produce better decisions. Turbo codes are still in the process of standardization, but future applications will include mobile communication systems, deep space communications, telemetry, and multimedia. Finally, we compare these two algorithms with respect to complexity and performance.
11

Taubin, Feliks, and Andrey Trofimov. "Concatenated Coding for Multilevel Flash Memory with Low Error Correction Capabilities in Outer Stage." SPIIRAS Proceedings 18, no. 5 (September 19, 2019): 1149–81. http://dx.doi.org/10.15622/sp.2019.18.5.1149-1181.

Abstract:
One of the approaches to the organization of error-correcting coding for multilevel flash memory is based on a concatenated construction, in particular, on multidimensional lattices for inner coding. A characteristic feature of such structures is the dominance of the complexity of the outer decoder in the total decoder complexity. Therefore a concatenated construction with a low-complexity outer decoder may be attractive, since in practical applications the decoder complexity is the crucial limitation on the usage of error correction coding. We consider a concatenated coding scheme for multilevel flash memory with Barnes-Wall lattice based codes as the inner code and a Reed-Solomon code correcting up to 4 or 5 errors as the outer one. Performance analysis is carried out for a model characterizing the basic physical features of a flash memory cell with non-uniform target voltage levels and noise variance dependent on the recorded value (input-dependent additive Gaussian noise, ID-AGN). For this model we develop a modification of our approach for evaluating the error probability of the inner code. This modification uses the parallel structure of the inner code trellis, which significantly reduces the computational complexity of the performance estimation. We present numerical examples of achievable recording density for Reed-Solomon codes correcting up to four errors as the outer code, for a wide range of retention times and numbers of write/read cycles.
12

Massey, J. "Review of 'Theory and Practice of Error Control Codes' (Blahut, R.E.; 1983)." IEEE Transactions on Information Theory 31, no. 4 (July 1985): 553–54. http://dx.doi.org/10.1109/tit.1985.1057072.

13

Ben-Gal, Irad, and Lev B. Levitin. "An application of information theory and error-correcting codes to fractional factorial experiments." Journal of Statistical Planning and Inference 92, no. 1-2 (January 2001): 267–82. http://dx.doi.org/10.1016/s0378-3758(00)00165-8.

14

Lipnitski, V. A., and S. I. Semyonov. "Syndrome spectrums of error orbits in RS-codes." «System analysis and applied information science», no. 1 (March 27, 2020): 34–38. http://dx.doi.org/10.21122/2309-4923-2020-1-34-38.

Abstract:
This article is devoted to research on the properties of syndromes of errors in Reed-Solomon codes. RS-codes are built over non-binary alphabets, so, unlike BCH-codes, they admit an extremely large variety of correctable errors. To correct these errors, a systematic application of automorphisms of the codes is proposed. Characteristic automorphisms of RS-codes are cyclic and affine substitutions forming cyclic groups Г and A whose orders coincide with the code length. Cyclic and affine substitutions commute with each other and generate a joint group АГ, which is the product of the subgroups A and Г. These three groups act on the space of error vectors of RS-codes, breaking this space into three types of error orbits. As a rule, these orbits are complete and contain the maximum possible number of errors. Syndromes are the main indicator of the presence of errors in each message received by an information system and a means of accurately identifying these errors. The specificity of syndromes of double errors in RS-codes is investigated. It is determined that the syndrome spectrums of error orbits are also complete in most cases. It is proved that the structure of the syndrome spectrums copies the structure of the orbits themselves, which in turn copy the structure of the groups of code automorphisms. The results obtained are a significant contribution to the construction of a theory of syndrome norms for RS-codes.
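As background for the syndrome spectrums discussed above, the textbook definition of Reed-Solomon syndromes (standard material, not specific to this paper) is:

```latex
% For a narrow-sense RS code over GF(q) with primitive element \alpha and
% designed distance 2t+1, a received word r(x) = c(x) + e(x) has syndromes
\[
  S_j \;=\; r(\alpha^{j}) \;=\; e(\alpha^{j}), \qquad j = 1, \dots, 2t,
\]
% which depend only on the error polynomial e(x); automorphisms of the
% code therefore act on error patterns and, through e(x), on syndromes.
```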
15

Sapozhnikov, Valeriy, Vladimir Sapozhnikov, Dmitriy Efanov, and Ruslan Abdullaev. "The specificities of organization of concurrent error detection systems for combinational circuits based on polynomial codes." Proceedings of Petersburg Transport University, no. 3 (September 20, 2018): 432–45. http://dx.doi.org/10.20295/1815-588x-2018-3-432-445.

Abstract:
Objective: To study the specificities of polynomial codes application during the organization of concurrent error detection systems for combinational logic circuits of automation and computer engineering. Methods: The methods of information theory and coding, the theory of discrete devices and diagnostic engineering of discrete systems were applied. Results: The possibilities of using polynomial codes in the process of combinational logic circuits control organization were analyzed. Some essential properties, inherent in generator polynomials, which make it possible to synthesize self-checking circuits of concurrent error detection systems, were pointed out. Particularly, one of such essential properties is the presence of a constant term in a generator polynomial (otherwise, all the required test patterns are not generated for a complete check of a coding device). An example of concurrent error detection system implementation for a combinational circuit was given. Some experimental data on error detection in LGSynth’89 combinational benchmarks were described. Practical importance: The use of polynomial codes for combinational circuit control makes it possible to synthesize self-checking discrete devices of automation and computer engineering.
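The role of the generator polynomial's constant term is easiest to see in the check-bit computation itself. The sketch below is a generic CRC-style division over GF(2), with an arbitrary example generator (x^3 + x + 1, constant term present); it is illustrative, not the paper's circuitry:

```python
# Hedged illustration: check bits of a polynomial (CRC-style) code,
# computed as the remainder of data(x) * x^deg(g) divided by g(x) over GF(2).

def poly_mod2_remainder(data_bits, generator_bits):
    """Remainder of data(x) * x^(deg g) modulo g(x), bits MSB first."""
    padded = list(data_bits) + [0] * (len(generator_bits) - 1)
    for i in range(len(data_bits)):
        if padded[i]:
            for j, g in enumerate(generator_bits):
                padded[i + j] ^= g
    return padded[len(data_bits):]

g = [1, 0, 1, 1]            # x^3 + x + 1: note the constant term
data = [1, 0, 1, 1, 0, 1]   # example data vector
print(poly_mod2_remainder(data, g))  # three check bits for this vector
```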
16

He, Xianmang. "Constructing new q-ary quantum MDS codes with distances bigger than q/2 from generator matrices." Quantum Information and Computation 18, no. 3&4 (March 2018): 223–30. http://dx.doi.org/10.26421/qic18.3-4-3.

Abstract:
The construction of quantum error-correcting codes has been an active field of quantum information theory since the publication of the pioneering works of Shor, Steane, and Laflamme. It is becoming more and more difficult to construct new quantum MDS codes with large minimum distance. In this paper, based on the approach developed in an earlier paper of the author, we construct several new classes of quantum MDS codes. The quantum MDS codes exhibited here have not been constructed before, and their distance parameters are bigger than q/2.
17

Imai, H., J. Mueller-Quade, A. C. A. Nascimento, P. Tuyls, and A. Winter. "An information theoretical model for quantum secret sharing schemes." Quantum Information and Computation 5, no. 1 (January 2005): 68–79. http://dx.doi.org/10.26421/qic5.1-7.

Abstract:
Similarly to earlier models for quantum error-correcting codes, we introduce a quantum information theoretical model for quantum secret sharing schemes. This model provides new insights into the theory of quantum secret sharing. Using our model, among other results, we give a shorter proof of Gottesman's theorem that the size of the shares in a quantum secret sharing scheme must be as large as the secret itself. We also introduce approximate quantum secret sharing schemes and show the robustness of quantum secret sharing schemes by extending Gottesman's theorem to the approximate case.
18

Riznyk, V. V., D. Yu Skrybaylo-Leskiv, V. M. Badz, C. I. Hlod, V. V. Liakh, Y. M. Kulyk, N. B. Romanjuk, K. I. Tkachuk, and V. V. Ukrajinets. "COMPARATIVE ANALYSIS OF MONOLITHIC AND CYCLIC NOISE-PROTECTIVE CODES EFFECTIVENESS." Ukrainian Journal of Information Technology 3, no. 1 (2021): 99–105. http://dx.doi.org/10.23939/ujit2021.03.099.

Abstract:
Comparative analysis of the effectiveness of monolithic and cyclic noise-protective codes built on "Ideal Ring Bundles" (IRBs) is carried out, with IRBs serving as the common theoretical basis for the synthesis, study, and application of codes that improve the technical indexes of coding systems with respect to performance, reliability, transformation speed, and security. IRBs are cyclic sequences of positive integers which form perfect partitions of a finite interval of integers: sums of consecutive IRB elements enumerate the set of natural numbers exactly R times. Both monolithic and cyclic IRB-codes formed on the underlying combinatorial constructions can be used to find optimal configurations of applicable coding systems on a common mathematical platform. The mathematical model of noise-protective data coding systems exhibits remarkable properties of harmoniously developed real space, and these properties allow codes with useful possibilities to be configured. The first belong to the self-correcting codes, owing to the monolithic arrangement of both the "1" and "0" symbols in each allowed codeword; this allows errors to be detected and corrected automatically from the monolithic structure of the encoded words. IRB-codes of the second type improve the noise protection of the codes by choosing the optimal ratio of information parameters. As a result of the comparative analysis of cyclic IRB-codes with optimized parameters and monolithic IRB-codes, it was found that optimized cyclic IRB-codes have an advantage over monolithic ones with respect to a clearly fixed number of detected and corrected errors, while monolithic codes compare favorably in the speed of message decoding due to their inherent properties of self-correction and encryption. Monolithic codes are characterized by the packing of identical characters into solid blocks; the latter are capable of encoding data on several levels at the same time, which expands the ability to encrypt and protect encoded data from unauthorized access. The effectiveness of the coding optimization methods is evaluated with respect to the speed of forming coding systems, the power of the method, and error correction capability. The model is based on the contemporary theory of combinatorial configurations, which opens a wide scientific field for the development of fundamental and applied research in information technologies, including the application of multidimensional models and algorithms for the synthesis of the underlying models.
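The defining IRB property, that sums of consecutive elements of the ring sequence enumerate an interval of integers exactly R times, is easy to verify computationally. The check below uses the classic ring sequence (1, 2, 4), an illustrative example not drawn from the paper, for which R = 1 over the interval 1..7:

```python
# Verify the "Ideal Ring Bundle" property for a candidate ring sequence:
# sums of consecutive elements, taken cyclically, should cover 1..sum(seq).

def consecutive_ring_sums(seq):
    n, total = len(seq), sum(seq)
    sums = []
    for start in range(n):
        for length in range(1, n):          # proper cyclic runs
            sums.append(sum(seq[(start + k) % n] for k in range(length)))
    sums.append(total)                      # the full ring, counted once
    return sorted(sums)

print(consecutive_ring_sums([1, 2, 4]))     # -> [1, 2, 3, 4, 5, 6, 7]
```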
19

Scott, A. J., J. Walgate, and B. C. Sanders. "Optimal fingerprinting strategies with one-sided error." Quantum Information and Computation 7, no. 3 (March 2007): 243–64. http://dx.doi.org/10.26421/qic7.3-5.

Abstract:
Fingerprinting enables two parties to infer whether the messages they hold are the same or different when the cost of communication is high: each message is associated with a smaller fingerprint and comparisons between messages are made in terms of their fingerprints alone. In the simultaneous message passing model, it is known that fingerprints composed of quantum information can be made exponentially smaller than those composed of classical information. For small message lengths, we present constructions of optimal classical fingerprinting strategies with one-sided error, in both the one-way and simultaneous message passing models, and provide bounds on the worst-case error probability with the help of extremal set theory. The performance of these protocols is then compared to that for quantum fingerprinting strategies constructed from spherical codes, equiangular tight frames and mutually unbiased bases.
20

Klimo, Martin, Peter Lukáč, and Peter Tarábek. "Deep Neural Networks Classification via Binary Error-Detecting Output Codes." Applied Sciences 11, no. 8 (April 15, 2021): 3563. http://dx.doi.org/10.3390/app11083563.

Abstract:
One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high dimensional vector representations when the categorical data’s cardinality is high. The Hamming distance in one-hot encoding is equal to two from the coding theory perspective, which does not allow detection or error-correcting capabilities. Binary coding provides more possibilities for encoding categorical data into the output codes, which mitigates the limitations of the one-hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their possibility of separating class information from the checksum part of the codeword, showing their ability not only to detect recognition errors by calculating non-zero syndrome, but also to evaluate the truth-value of the decision. Experimental results show that the proposed approach achieves similar results as one-hot encoding with a softmax function in terms of accuracy, reliability, and out-of-distribution performance. It suggests a good foundation for future applications, mainly classification tasks with a high number of classes.
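The syndrome test mentioned above can be sketched with any linear block code. The (7,4) Hamming parity-check matrix below is textbook material, not the code trained in the paper; a non-zero syndrome flags a corrupted output codeword:

```python
# Illustrative sketch: error detection via the syndrome s = H r^T over GF(2).
# H is the standard (7,4) Hamming parity-check matrix (column i encodes i).

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(received):
    return [sum(h * r for h, r in zip(row, received)) % 2 for row in H]

codeword = [1, 0, 1, 1, 0, 1, 0]      # a valid codeword: syndrome is zero
assert syndrome(codeword) == [0, 0, 0]

corrupted = codeword[:]
corrupted[2] ^= 1                      # flip one bit
print(syndrome(corrupted))             # non-zero -> recognition error detected
```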
21

Namba, Kazuteru, and Eiji Fujiwara. "Nonbinary single-symbol error correcting, adjacent two-symbol transposition error correcting codes over integer rings." Systems and Computers in Japan 38, no. 8 (2007): 54–60. http://dx.doi.org/10.1002/scj.10516.

22

Haselgrove, H. L., and P. P. Rohde. "Trade-off between the tolerance of located and unlocated errors in nondegenerate quantum codes." Quantum Information and Computation 8, no. 5 (May 2008): 399–410. http://dx.doi.org/10.26421/qic8.5-3.

Abstract:
In a recent study [Rohde et al., quant-ph/0603130 (2006)] of several quantum error correcting protocols designed for tolerance against qubit loss, it was shown that these protocols have the undesirable effect of magnifying the effects of depolarization noise. This raises the question of which general properties of quantum error-correcting codes might explain such an apparent trade-off between tolerance to located and unlocated error types. We extend the counting argument behind the well-known quantum Hamming bound to derive a bound on the weights of combinations of located and unlocated errors which are correctable by nondegenerate quantum codes. Numerical results show that the bound gives an excellent prediction of which combinations of unlocated and located errors can be corrected with high probability by certain large degenerate codes. The numerical results are explained partly by showing that the generalized bound, like the original, is closely connected to the information-theoretic quantity known as the quantum coherent information. However, we also show that as a measure of the exact performance of quantum codes, our generalized Hamming bound is provably far from tight.
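For reference, the standard quantum Hamming bound that the paper extends (textbook form, unlocated errors only; the located-error generalization is the paper's contribution) is:

```latex
% Quantum Hamming bound: a nondegenerate ((n, 2^k)) code correcting t
% unlocated errors (3 nontrivial Pauli types per qubit) must satisfy
\[
  2^{k} \sum_{j=0}^{t} 3^{j} \binom{n}{j} \;\le\; 2^{n}.
\]
```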
23

Namba, Kazuteru, and Eiji Fujiwara. "A class of systematic m-ary single-symbol error correcting codes." Systems and Computers in Japan 32, no. 6 (2001): 21–28. http://dx.doi.org/10.1002/scj.1030.

24

Shimada, Ryosaku, Ryutaro Murakami, Kazuharu Sono, and Yoshiteru Ohkura. "Arithmetic burst error correcting fire-type cyclic ST-AN codes." Systems and Computers in Japan 18, no. 7 (1987): 57–68. http://dx.doi.org/10.1002/scj.4690180706.

25

Lee, Ming-Che, Jia-Wei Chang, Tzone I. Wang, and Zi Feng Huang. "Using Variation Theory as a Guiding Principle in an OOP Assisted Syntax Correction Learning System." International Journal of Emerging Technologies in Learning (iJET) 15, no. 14 (July 31, 2020): 35. http://dx.doi.org/10.3991/ijet.v15i14.14191.

Abstract:
Object-oriented programming skill is important for software professionals and has become a mandatory course in information science and computer engineering departments of universities. However, it is hard for novice learners to understand the syntax and semantics of the language while learning object-oriented programming, and that makes them feel frustrated. The purpose of this study is to build an object-oriented programming assistant system that gives syntax error feedback based on the variation theory. We established the syntax correction module on the basis of the Virtual Teaching Assistant (VTA). While compiling code, the system displays syntax errors, if any, with feedback designed according to the variation theory at different levels (the generation, contrast, separation, and fusion levels) to help learners correct the errors. The experimental design of this study splits the participants, who are university freshmen, into two groups by the S-type method based on the results of a mid-term test. Learning performance and questionnaires were used for the survey, followed by in-depth interviews, to evaluate the feasibility of the proposed assistant system. The findings indicate that the learners in the experimental group achieved better learning outcomes than their counterparts in the control group. This also shows that the strategy of using the variation theory in implementing feedback for object-oriented programming is effective.
26

Riznyk, V. V., and D. Yu Skrybaylo-Leskiv. "IMPROVEMENT OF CYCLIC CODES EFFECTIVENESS BY COMBINATORIAL OPTIMIZATION METHODS." Ukrainian Journal of Information Technology 2, no. 1 (2020): 66–72. http://dx.doi.org/10.23939/ujit2020.02.066.

Abstract:
The methods of improving the efficiency of cyclic codes constructed on the basis of combinatorial configurations of the type "ideal ring bundles" (IRBs) are considered with respect to three factors: correction ability, power of the coding method, and complexity of the decoding procedure. The method is based on the principle of combinatorial optimization, grounded in the algebraic theory of ordered integer sequences with a circular structure, in which all the numbers, as well as all sums of consecutive numbers, exhaust the values of the natural number series. Two theoretically grounded approaches to increasing the noise immunity of cyclic codes are offered: implementation of optimized IRB-codes, as well as monolithic and group ones. Optimized cyclic IRB-codes favorably differ from the rest of the codes of this class by the highest correction capacity at the same length of code words. Optimized IRB-codes constitute a large group of cyclic codes designed on combinatorial models with selection of corresponding relationships between the parameters of the code to achieve its specified technical characteristics. Noise-protected monolithic and group codes belong to the group of self-correcting codes with a ring structure and a probabilistic assessment of the level of noise protection. This property allows one to instantly detect a particular part of, or all, invalid characters in the code word by the majority principle. Mathematical calculations have been performed to determine the optimized ratios between the parameters of cyclic IRB-codes under which they reach maximum correction capacity. The algorithm for constructing and increasing the power of coding methods for optimized noise-resistant IRB-codes is examined and analyzed. Concrete examples of increasing the efficiency of cyclic codes by combinatorial optimization methods, with appropriate calculations and tables, are given. A comparative analysis of the IRB-codes with the Golay codes and the Bose–Chaudhuri–Hocquenghem (BCH) codes with respect to correction ability, power of the encoding method, and computational complexity of the decoding procedures is carried out. The advantages and disadvantages of cyclic, ring monolithic, and group IRB-codes in comparison with classical analogues are determined. The prospects of using the research results in problems of information and communication technologies are outlined.
27

Kuznetsov, Alexandr, Oleg Oleshko, and Kateryna Kuznetsova. "ENERGY GAIN FROM ERROR-CORRECTING CODING IN CHANNELS WITH GROUPING ERRORS." Acta Polytechnica 60, no. 1 (March 2, 2020): 65–72. http://dx.doi.org/10.14311/ap.2020.60.0065.

Abstract:
This article explores a mathematical model of a data transmission channel with error grouping. We propose a method for estimating the energy gain from coding and the energy efficiency of binary codes in channels with grouped errors. The proposed method uses a simplified Bennet and Froelich model and allows the energy gain from coding to be studied for a wide class of data channels without restricting the way the lengths of error bursts are distributed. The reliability of the obtained results is confirmed by their agreement with known results in the theory of error-correcting coding in the simplified variant.
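The headline quantity, energy gain from coding, has a standard definition worth recalling (textbook background, independent of the authors' channel model):

```latex
% Coding gain at a target error probability P_e: the reduction in required
% signal-to-noise ratio per information bit,
\[
  G \;=\; \frac{\left( E_b / N_0 \right)_{\text{uncoded}}}
               {\left( E_b / N_0 \right)_{\text{coded}}}
  \quad \text{at equal } P_e,
  \qquad
  G_{\mathrm{dB}} = 10 \log_{10} G .
\]
```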
28

Maltiyar, Kaveri, and Deepti Malviya. "Polar Code: An Advanced Encoding And Decoding Architecture For Next Generation 5G Applications." International Journal on Recent and Innovation Trends in Computing and Communication 7, no. 5 (June 4, 2019): 26–29. http://dx.doi.org/10.17762/ijritcc.v7i5.5307.

Abstract:
Polar codes are a new channel coding scheme that is expected to be widely applied in next-generation wireless communication systems. Polar codes, introduced by Arikan, achieve the capacity of symmetric channels with low encoding and decoding complexity for a large class of underlying channels. Recently, the polar code has become the most favorable error-correcting code from the viewpoint of information theory due to its capacity-achieving property. Polar codes achieve the capacity of the class of symmetric binary memoryless channels. This paper reviews the polar code as an advanced encoding and decoding architecture for next-generation applications.
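As a concrete complement to the review, the core encoding operation (Arikan's polar transform) fits in a few lines. The sketch below applies the recursive butterfly in O(N log N) under one common bit-ordering convention (no bit-reversal permutation) and omits frozen-bit selection and decoding:

```python
# Minimal polar transform sketch: x = u * F^{(n)} over GF(2), where
# F = [[1, 0], [1, 1]] and N = 2^n, computed by in-place butterflies.

def polar_encode(u):
    x = list(u)
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # butterfly: (a, b) -> (a xor b, b)
        step *= 2
    return x

print(polar_encode([1, 0, 1, 1]))     # -> [1, 1, 0, 1]
```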
29

Efanov, Dmitry, Valery Sapozhnikov, and Vladimir Sapozhnikov. "Two-Module Weight-Based Sum Code in Residue Ring Modulo M=4." SPIIRAS Proceedings 19, no. 3 (June 1, 2020): 674–713. http://dx.doi.org/10.15622/sp.2020.19.3.8.

Abstract:
The paper describes the results of research into the features of error detection in data vectors by sum codes. The task in this setting is relevant, first of all, for the use of sum codes in the implementation of checkable discrete systems and the technical means for diagnosing their components. Methods for constructing sum codes are described, and a brief overview of the field is provided. The article highlights codes for which the values of all data bits are taken into account once, through summation of their values or of the weight coefficients of the bits, during the formation of the check vector. The paper also highlights codes that are formed when the data vectors are initially divided into subsets, in particular into two subsets. An extension of the sum code class is proposed, obtained by isolating two independent parts in the data vectors and weighting the bits of the data vectors at the stage of code construction. The paper provides a generalized algorithm for constructing two-module weighted codes and describes the features obtained by weighting one data bit in each of the subvectors with a non-unit weight coefficient, according to which the total weight is calculated. Particular attention is paid to the two-module weight-based sum code for which the total weight of the data vector is determined in the residue ring modulo M = 4. It is shown that introducing inequality between the bits of the data vector in some cases improves the error detection characteristics compared to the well-known two-module codes. Some modifications of the proposed two-module weighted codes are described. A method is proposed for calculating the total number of undetectable errors in two-module sum codes in the residue ring modulo M = 4 with one weighted bit in each of the subsets. Detailed characteristics of error detection by the considered codes are given, both by the multiplicities of undetectable errors and by their types (unidirectional, symmetrical, and asymmetrical errors). The proposed codes are compared with known codes. A method for the synthesis of two-module sum encoders on a standard element base of single-signal adders is proposed. A classification of two-module sum codes is presented.
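A minimal sketch of the check-symbol computation may help fix ideas. The split and weights below are illustrative assumptions, not the paper's exact construction: the data vector is divided into two subvectors, one designated bit in each subvector carries a non-unit weight, and the total weight is reduced modulo M = 4:

```python
# Hedged sketch of a two-module weight-based sum code check symbol.

M = 4  # residue ring modulus

def check_symbol(data_bits):
    half = len(data_bits) // 2
    sub1, sub2 = data_bits[:half], data_bits[half:]

    def weighted(sub):
        # assumption for illustration: weight 2 on the first bit,
        # weight 1 on the remaining bits of the subvector
        return 2 * sub[0] + sum(sub[1:])

    return (weighted(sub1) + weighted(sub2)) % M

data = [1, 0, 1, 1, 0, 1, 1, 0]
print(check_symbol(data))  # check symbol transmitted alongside the data
```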
30

Martinez-Mateo, Jesus, David Elkouss, and Vicente Martin. "Blind reconciliation." Quantum Information and Computation 12, no. 9&10 (September 2012): 791–812. http://dx.doi.org/10.26421/qic12.9-10-5.

Abstract:
Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has been mainly done using close to minimal information disclosure but heavily interactive procedures, like Cascade, or using less efficient but also less interactive procedures (just one message is exchanged), like the ones based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing to have several codes if a broad range of QBER needs to be catered for. Real world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the last generation of GHz clocked QKD systems are finding a bottleneck in the classical part. In order to produce compact, high performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low interactivity, "blind", protocol that avoids an a priori error rate estimation. We demonstrate that 2×10^3-bit length LDPC codes are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
31

Mundici, Daniele. "Ulam Games, Łukasiewicz Logic, and AF C*-Algebras." Fundamenta Informaticae 18, no. 2-4 (April 1, 1993): 151–61. http://dx.doi.org/10.3233/fi-1993-182-405.

Abstract:
Ulam asked what is the minimum number of yes-no questions necessary to find an unknown number in the search space (1, …, 2^n) if up to l of the answers may be erroneous. The solutions to this problem provide optimal adaptive l-error-correcting codes. Traditional, nonadaptive l-error-correcting codes correspond to the particular case when all questions are formulated before all answers. We show that answers in Ulam’s game obey the (l+2)-valued logic of Łukasiewicz. Since approximately finite-dimensional (AF) C*-algebras can be interpreted in the infinite-valued sentential calculus, we discuss the relationship between game-theoretic notions and their C*-algebraic counterparts. We describe the correspondence between continuous trace AF C*-algebras and Ulam games with separable Boolean search space S, whose questions are the clopen subspaces of S. We also show that these games correspond to finite products of countable Post MV algebras, as well as to countable lattice-ordered Specker groups with strong unit.
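The tie to error-correcting codes is captured by Berlekamp's volume bound (standard background, not from the paper): a winning strategy with q yes-no questions for a search space of size 2^n and up to l lies must satisfy the sphere-packing condition

```latex
\[
  2^{\,q} \;\ge\; 2^{\,n} \sum_{i=0}^{l} \binom{q}{i},
\]
% i.e., exactly the Hamming bound for an l-error-correcting binary code
% of length q with 2^n codewords.
```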
32

Lee, Sunghoon, Jooyoun Park, and Jun Heo. "Improved reconciliation with polar codes in quantum key distribution." Quantum Information and Computation 18, no. 9&10 (August 2018): 795–813. http://dx.doi.org/10.26421/qic18.9-10-5.

Abstract:
Quantum key distribution (QKD) is a cryptographic system that generates an information-theoretically secure key shared by two legitimate parties. QKD consists of two parts: quantum and classical. The latter is referred to as classical post-processing (CPP). Information reconciliation is a part of CPP in which parties are given correlated variables and attempt to eliminate the discrepancies between them while disclosing a minimum amount of information. The elegant reconciliation protocol known as Cascade was developed specifically for QKD in 1992 and has become the de-facto standard for all QKD implementations. However, the protocol is highly interactive. Thus, other protocols based on linear block codes such as Hamming codes, low-density parity-check (LDPC) codes, and polar codes have been researched. In particular, reconciliation using LDPC codes has been mainly studied because of its outstanding performance. Nevertheless, with small block size, the bit error rate performance of polar codes under successive-cancellation list (SCL) decoding with a cyclic redundancy check (CRC) is comparable to state-of-the-art turbo and LDPC codes. In this study, we demonstrate the use of polar codes to improve the performance of information reconciliation in a QKD system with small block size. The best decoder for polar codes, a CRC-aided SCL decoder, requires CRC-precoded messages. However, messages that are sifted keys in QKD are obtained arbitrarily as a result of a characteristic of the QKD protocol and cannot be CRC-precoded. We propose a method that allows arbitrarily obtained sifted keys to be CRC-precoded by introducing a virtual string. Thus, the best decoder can be used for reconciliation using polar codes and improves the efficiency of the protocol.
33

Kesselring, Markus S., Fernando Pastawski, Jens Eisert, and Benjamin J. Brown. "The boundaries and twist defects of the color code and their applications to topological quantum computation." Quantum 2 (October 19, 2018): 101. http://dx.doi.org/10.22331/q-2018-10-19-101.

Abstract:
The color code is both an interesting example of an exactly solved topologically ordered phase of matter and also among the most promising candidate models to realize fault-tolerant quantum computation with minimal resource overhead. The contributions of this work are threefold. First of all, we build upon the abstract theory of boundaries and domain walls of topological phases of matter to comprehensively catalog the objects realizable in color codes. Together with our classification we also provide lattice representations of these objects which include three new types of boundaries as well as a generating set for all 72 color code twist defects. Our work thus provides an explicit toy model that will help to better understand the abstract theory of domain walls. Secondly, we discover a number of interesting new applications of the cataloged objects for quantum information protocols. These include improved methods for performing quantum computations by code deformation, a new four-qubit error-detecting code, as well as families of new quantum error-correcting codes we call stellated color codes, which encode logical qubits at the same distance as the next best color code, but using approximately half the number of physical qubits. To the best of our knowledge, our new topological codes have the highest encoding rate of local stabilizer codes with bounded-weight stabilizers in two dimensions. Finally, we show how the boundaries and twist defects of the color code are represented by multiple copies of other phases. Indeed, in addition to the well studied comparison between the color code and two copies of the surface code, we also compare the color code to two copies of the three-fermion model. In particular, we find that this analogy offers a very clear lens through which we can view the symmetries of the color code which gives rise to its multitude of domain walls.
34

Jiang, Peng, Jie Luo, Yiqi Wang, Pingji Deng, Bertil Schmidt, Xiangjun Tang, Ningjiang Chen, Limsoon Wong, and Liang Zhao. "kmcEx: memory-frugal and retrieval-efficient encoding of counted k-mers." Bioinformatics 35, no. 23 (April 30, 2019): 4871–78. http://dx.doi.org/10.1093/bioinformatics/btz299.

Abstract:
Motivation: K-mers, along with their frequency, have served as an elementary building block for error correction, repeat detection, multiple sequence alignment, genome assembly, etc., attracting intensive studies in k-mer counting. However, the output of k-mer counters itself is large; very often, it is too large to fit into main memory, leading to highly narrowed usability.

Results: We introduce a novel idea of encoding k-mers as well as their frequency, achieving good memory saving and retrieval efficiency. Specifically, we propose a Bloom filter-like data structure to encode counted k-mers by coupled-bit arrays, one for k-mer representation and the other for frequency encoding. Experiments on five real datasets show that the average memory-saving ratio on all 31-mers is as high as 13.81 as compared with raw input, with 7 hash functions. At the same time, the retrieval time complexity is well controlled (effectively constant), and the false-positive rate is decreased by two orders of magnitude.

Availability and implementation: The source codes of our algorithm are available at github.com/lzhLab/kmcEx.

Supplementary information: Supplementary data are available at Bioinformatics online.
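To make the "Bloom filter-like" idea concrete, here is a toy Bloom filter over k-mer strings. It shows only the membership side, not kmcEx's coupled bit arrays or frequency encoding, and every parameter here is an illustrative assumption:

```python
# Toy Bloom filter: k positions per item, set on insert, tested on query.
# False positives are possible; false negatives are not.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for seed in range(self.num_hashes):
            h = hashlib.blake2b(item.encode(), salt=seed.to_bytes(16, "little"))
            yield int.from_bytes(h.digest()[:8], "little") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("ACGTACGTACGTACGTACGTACGTACGTACG")          # a 31-mer
print("ACGTACGTACGTACGTACGTACGTACGTACG" in bf)     # True
```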
35

Baldi, Marco, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi, and Paolo Santini. "A Finite Regime Analysis of Information Set Decoding Algorithms." Algorithms 12, no. 10 (October 1, 2019): 209. http://dx.doi.org/10.3390/a12100209.

Abstract:
Decoding of random linear block codes has been long exploited as a computationally hard problem on which it is possible to build secure asymmetric cryptosystems. In particular, both correcting an error-affected codeword, and deriving the error vector corresponding to a given syndrome were proven to be equally difficult tasks. Since the pioneering work of Eugene Prange in the early 1960s, a significant research effort has been put into finding more efficient methods to solve the random code decoding problem through a family of algorithms known as information set decoding. The obtained improvements effectively reduce the overall complexity, which was shown to decrease asymptotically at each optimization, while remaining substantially exponential in the number of errors to be either found or corrected. In this work, we provide a comprehensive survey of the information set decoding techniques, providing finite regime temporal and spatial complexities for them. We exploit these formulas to assess the effectiveness of the asymptotic speedups obtained by the improved information set decoding techniques when working with code parameters relevant for cryptographic purposes. We also delineate computational complexities taking into account the achievable speedup via quantum computers and similarly assess such speedups in the finite regime. To provide practical grounding to the choice of cryptographically relevant parameters, we employ as our validation suite the ones chosen by cryptosystems admitted to the second round of the ongoing standardization initiative promoted by the US National Institute of Standards and Technology.
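The hard problem at the center of this survey is easy to state in code. The sketch below is the naive exponential baseline, exhaustive syndrome decoding, not an information set decoding algorithm; ISD methods such as Prange's accelerate this search while remaining exponential in the number of errors:

```python
# Brute-force syndrome decoding: find e of weight <= w with H e^T = s (mod 2).

from itertools import combinations

def syndrome_decode(H, s, w):
    n = len(H[0])
    for t in range(w + 1):                       # try weights 0, 1, ..., w
        for support in combinations(range(n), t):
            e = [1 if i in support else 0 for i in range(n)]
            if all(sum(row[i] * e[i] for i in range(n)) % 2 == b
                   for row, b in zip(H, s)):
                return e
    return None

H = [[1, 0, 1, 1, 0],
     [0, 1, 1, 0, 1]]
print(syndrome_decode(H, [1, 1], w=1))           # -> [0, 0, 1, 0, 0]
```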
36

Xue, Rui, Tong Wang, Yanbo Sun, and Huaiyu Tang. "Optimized Design for NB-LDPC-Coded High-Order CPM: Power and Iterative Efficiencies." Symmetry 12, no. 8 (August 13, 2020): 1353. http://dx.doi.org/10.3390/sym12081353.

Abstract:
In this paper, a non-binary low-density parity-check (NB-LDPC) coded high-order continuous phase modulation (CPM) system is designed and optimized to improve power and iterative efficiencies. Firstly, the minimum squared normalized Euclidean distance and the 99% double-sided power bandwidth are introduced to design a competitive CPM, improving its power efficiency under a given code rate and spectral efficiency. Secondly, a three-step method based on extrinsic information transfer (EXIT) and entropy theory is used to design NB-LDPC codes, which reduces the convergence threshold by approximately 0.42 and 0.58 dB compared with the candidate schemes. Thirdly, an extrinsic information operation is proposed to address the positive feedback issue in iterative detection and decoding, reducing the bit error rate (BER) by approximately 5×10⁻³. Finally, iteration optimization employing the EXIT chart and mutual information between demodulation and decoding is performed to achieve a suitable tradeoff between communication reliability and iterative decoding delay. Simulation results show that the resulting scheme provides an approximately 3.95 dB coding gain compared to the uncoded CPM and achieves approximately 0.5 and 0.7 dB advantages compared with the candidate schemes. The resulting NB-LDPC-coded high-order CPM for a given code rate and spectral efficiency converges earlier into a turbo cliff region compared with other competitors and significantly improves power and iterative efficiencies.
37

Garzon, Max H., and Kiran C. Bobba. "Geometric Approaches to Gibbs Energy Landscapes and DNA Oligonucleotide Design." International Journal of Nanotechnology and Molecular Computation 3, no. 3 (July 2011): 42–56. http://dx.doi.org/10.4018/ijnmc.2011070104.

Abstract:
DNA codeword design has been a fundamental problem since the early days of DNA computing. The problem calls for finding large sets of single DNA strands that do not crosshybridize to themselves, to each other, or to others' complements. Such strands represent so-called domains, particularly in the language of chemical reaction networks (CRNs). The problem has been shown to be of interest in other areas as well, including DNA memories and phylogenetic analyses, because of their error correction and prevention properties. In prior work, a theoretical framework to analyze this problem has been developed, and natural and simple versions of Codeword Design have been shown to be NP-complete using any single reasonable metric that approximates the Gibbs energy, thus making it very difficult in practice to find any general procedure for finding such maximal sets exactly and efficiently. In this framework, codeword design is partially reduced to finding large sets of strands maximally separated in DNA spaces and, therefore, the size of such sets depends on the geometry of these spaces. Here, the authors describe in detail a new general technique to embed them in Euclidean spaces in such a way that oligonucleotides with high (low, respectively) hybridization affinity are mapped to neighboring (remote, respectively) points in a geometric lattice. This embedding materializes long-held metaphors about codeword design in analogies with error-correcting code design in information theory in terms of sphere packing and leads to designs that are in some cases known to be provably nearly optimal for small oligonucleotide sizes, whenever the corresponding spherical codes in Euclidean spaces are known to be so. It also leads to upper and lower bounds on estimates of the size of optimal codes of size under 20-mers, as well as to a few infinite families of DNA strand lengths, based on estimates of the kissing (or contact) number for sphere codes in high-dimensional Euclidean spaces. Conversely, the authors show how solutions to DNA codeword design obtained by experimental or other means can also provide solutions to difficult spherical packing geometric problems via these approaches. Finally, the reduction suggests a tool to provide some insight into the approximate structure of the Gibbs energy landscapes, which play a primary role in the design and implementation of biomolecular programs.
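The "maximally separated strands" formulation can be illustrated with a greedy search under plain Hamming distance. This hypothetical sketch ignores reverse complements and Gibbs energies, which real codeword design must handle:

```python
# Greedily collect DNA 8-mers that are pairwise at Hamming distance >= 4.

from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_dna_code(length=8, min_dist=4, limit=20):
    code = []
    for strand in product("ACGT", repeat=length):
        if all(hamming(strand, c) >= min_dist for c in code):
            code.append(strand)
            if len(code) == limit:
                break
    return ["".join(s) for s in code]

print(greedy_dna_code()[:5])   # first few codewords of the greedy code
```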
38

Haeupler, Bernhard, and Amirbehshad Shahrasbi. "Synchronization Strings: Codes for Insertions and Deletions Approaching the Singleton Bound." Journal of the ACM 68, no. 5 (October 31, 2021): 1–39. http://dx.doi.org/10.1145/3468265.

Abstract:
We introduce synchronization strings, which provide a novel way to efficiently deal with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to cope with than more commonly considered Hamming-type errors, i.e., symbol substitutions and erasures. For every ε > 0, synchronization strings allow us to index a sequence with an ε^−O(1)-size alphabet, such that one can efficiently transform k synchronization errors into (1 + ε)k Hamming-type errors. This powerful new technique has many applications. In this article, we focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both Hamming-type errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey [30]: “Channels with synchronization errors...are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors...our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regimes, these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization strings-based indexing method gives a simple black-box construction that transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs into the realm of insdel codes. Most notably, for the complete noise spectrum, we obtain efficient “near-MDS” insdel codes, which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any δ ∈ (0,1) and ε > 0, we give a family of insdel codes achieving a rate of 1 − δ − ε over a constant-size alphabet that efficiently corrects a δ fraction of insertions or deletions.
APA, Harvard, Vancouver, ISO, and other styles
39

Efanov, Dmitry Viktorovich. "The Synthesis of Self-Checking Combinational Devices on the Basis of Codes with the Effective Symmetrical Error Detection." SPIIRAS Proceedings 4, no. 59 (July 12, 2018): 62. http://dx.doi.org/10.15622/sp.59.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Yu, Mian Shui, Yu Xie, and Xiao Meng Xie. "Age Classification Based on Feature Fusion." Applied Mechanics and Materials 519-520 (February 2014): 644–50. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.644.

Full text
Abstract:
Age classification based on facial images is attracting wide attention for its broad applications in human-computer interaction (HCI). Since human senescence is a tremendously complex process, age classification remains a highly challenging issue. In our study, the Local Directional Pattern (LDP) and the Gabor wavelet transform were used to extract global and local facial features, respectively, which were fused based on information fusion theory. The Principal Component Analysis (PCA) method was used for dimensionality reduction of the fused features to obtain a lower-dimensional age characteristic vector. A Support Vector Machine (SVM) multi-class classifier with Error-Correcting Output Codes (ECOC), aimed at multi-class classification problems such as age classification, was proposed in the paper. Experiments on a public FG-NET age database proved the efficiency of our method.
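A sketch of this pipeline in Python using scikit-learn, with random features standing in for the fused LDP/Gabor features (the FG-NET data and the feature extraction step are not reproduced here):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))   # 200 samples of 500-dim "fused" features
    y = rng.integers(0, 4, size=200)  # four hypothetical age groups

    clf = make_pipeline(
        PCA(n_components=50),                    # lower-dimensional age vector
        OutputCodeClassifier(SVC(kernel="rbf"),  # ECOC over binary SVMs
                             code_size=2.0, random_state=0),
    )
    clf.fit(X, y)
    print(clf.predict(X[:5]))

ECOC assigns each class a binary codeword and trains one binary SVM per code bit, so a few misclassifying binary learners can be "corrected" at decoding time, the same principle as in error-correcting codes.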
APA, Harvard, Vancouver, ISO, and other styles
41

Sason, Igal. "On Data-Processing and Majorization Inequalities for f-Divergences with Applications." Entropy 21, no. 10 (October 21, 2019): 1022. http://dx.doi.org/10.3390/e21101022.

Full text
Abstract:
This paper is focused on the derivation of data-processing and majorization inequalities for f-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form the major part of this manuscript.
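As a small worked example of the data-processing theme (a generic numerical check, not taken from the paper): for the KL divergence, the f-divergence with f(t) = t log t, pushing two distributions through the same channel W can never increase their divergence, i.e., D(PW || QW) <= D(P || Q).

    import numpy as np

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    P = np.array([0.7, 0.2, 0.1])
    Q = np.array([0.3, 0.3, 0.4])
    W = np.array([[0.9, 0.1],   # row-stochastic channel: 3 inputs, 2 outputs
                  [0.5, 0.5],
                  [0.2, 0.8]])

    # Data-processing inequality: processing cannot increase divergence.
    assert kl(P @ W, Q @ W) <= kl(P, Q)
    print(kl(P, Q), kl(P @ W, Q @ W))   # ~0.373 vs ~0.131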
APA, Harvard, Vancouver, ISO, and other styles
42

Paler, Alexandru, Austin G. Fowler, and Robert Wille. "Online scheduled execution of quantum circuits protected by surface codes." Quantum Information and Computation 17, no. 15&16 (December 2017): 1335–48. http://dx.doi.org/10.26421/qic17.15-16-5.

Full text
Abstract:
Quantum circuits are the preferred formalism for expressing quantum information processing tasks. Quantum circuit design automation methods mostly use a waterfall approach and assume that high-level circuit descriptions are hardware-agnostic. This assumption has led to a static circuit perspective: the number of quantum bits and quantum gates is determined before circuit execution, and everything is considered reliable with zero probability of failure. Many different schemes for achieving reliable fault-tolerant quantum computation exist, with different schemes suitable for different architectures. A number of large experimental groups are developing architectures well suited to being protected by surface quantum error-correcting codes. Such circuits could include unreliable logical elements, such as state distillation, whose failure can be determined only after their actual execution. Therefore, practical logical circuits, as envisaged by many groups, are likely to have a dynamic structure. This requires online scheduling of their execution: one knows for sure what needs to be executed only after previous elements have finished executing. This work shows that scheduling shares similarities with place-and-route methods. The work also introduces the first online schedulers of quantum circuits protected by surface codes, and highlights scheduling efficiency by comparing the new methods with state-of-the-art static scheduling of surface-code-protected fault-tolerant circuits.
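A generic toy sketch of the online-scheduling constraint described here (not the paper's algorithm): an element's successors are released only once it has actually finished, and an unreliable element such as state distillation may fail and be re-queued.

    import heapq, random

    random.seed(1)
    deps = {"distill": [], "cnot": ["distill"], "measure": ["cnot"]}
    done, queued = set(), {"distill"}
    ready = [(0, "distill")]             # (time, element) priority queue

    while ready:
        t, el = heapq.heappop(ready)
        if el == "distill" and random.random() < 0.3:
            heapq.heappush(ready, (t + 1, el))   # distillation failed: retry
            continue
        done.add(el)
        print(f"t={t}: executed {el}")
        # Release successors whose predecessors have all finished.
        for nxt, pre in deps.items():
            if nxt not in done and nxt not in queued and all(p in done for p in pre):
                queued.add(nxt)
                heapq.heappush(ready, (t + 1, nxt))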
APA, Harvard, Vancouver, ISO, and other styles
43

Dahlberg, Axel, and Stephanie Wehner. "Transforming graph states using single-qubit operations." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2123 (May 28, 2018): 20170325. http://dx.doi.org/10.1098/rsta.2017.0325.

Full text
Abstract:
Stabilizer states form an important class of states in quantum information and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example to find transformations between quantum error-correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory: deciding whether some graph G′ is a vertex-minor of another graph G. The vertex-minor problem is, in general, NP-complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is its rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus is intuitively a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G′ are bounded. Our algorithm is based on techniques by Courcelle for solving fixed-parameter tractable problems, where the relevant fixed parameter is the rank-width. The second half of this paper serves as an accessible but far from exhaustive introduction to these concepts, which could be useful for many other problems in quantum information. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'.
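The graph operation at the heart of this equivalence is easy to state: a single-qubit Clifford operation on a graph state corresponds to local complementation, which toggles every edge among the neighbors of a vertex. A short sketch using networkx (illustrative only):

    import itertools
    import networkx as nx

    def local_complement(G, v):
        """Toggle every edge among the neighbors of v."""
        H = G.copy()
        for a, b in itertools.combinations(G.neighbors(v), 2):
            if H.has_edge(a, b):
                H.remove_edge(a, b)
            else:
                H.add_edge(a, b)
        return H

    G = nx.path_graph(4)        # 0-1-2-3, the graph of a 4-qubit line state
    H = local_complement(G, 1)  # neighbors of 1 are {0, 2}: edge (0,2) toggles
    print(sorted(H.edges()))    # [(0, 1), (0, 2), (1, 2), (2, 3)]

The vertex-minor question then asks whether G′ can be reached from G by such local complementations together with vertex deletions.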
APA, Harvard, Vancouver, ISO, and other styles
44

Isokawa, Teijiro, Fukutaro Abo, Ferdinand Peper, Susumu Adachi, Jia Lee, Nobuyuki Matsui, and Shinro Mashiko. "Fault-Tolerant Nanocomputers Based on Asynchronous Cellular Automata." International Journal of Modern Physics C 15, no. 06 (July 2004): 893–915. http://dx.doi.org/10.1142/s0129183104006327.

Full text
Abstract:
Cellular Automata (CA) are a promising architecture for computers with nanometer-scale sized components, because their regular structure potentially allows chemical manufacturing techniques based on self-organization. With the increase in integration density, however, comes a decrease in the reliability of the components from which such computers will be built. This paper employs BCH error-correcting codes to construct CA with improved reliability. We construct an asynchronous CA in which a quarter of the (ternary) bits storing a cell's state information may be corrupted without affecting the CA's operation, provided errors are evenly distributed over a cell's bits (no burst errors allowed). Under the same condition, the corruption of half of a cell's bits can be detected.
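As a stand-in illustration of the idea (a binary Hamming(7,4) demo rather than the paper's ternary BCH construction): a cell's logical state bits are stored in encoded form, and a single corrupted stored bit is located via the syndrome and flipped back before the state is used.

    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],    # Hamming(7,4) generator (systematic form)
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],    # parity-check matrix
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    state = np.array([1, 0, 1, 1])            # a cell's logical state bits
    stored = state @ G % 2                    # encoded word kept in the cell
    stored[5] ^= 1                            # one stored bit gets corrupted

    syndrome = H @ stored % 2                 # locate the error...
    err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
    stored[err_pos] ^= 1                      # ...and flip it back
    assert np.array_equal(stored[:4], state)  # logical state recovered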
APA, Harvard, Vancouver, ISO, and other styles
45

Ajana, Soufiane, Niyazi Acar, Lionel Bretillon, Boris P. Hejblum, Hélène Jacqmin-Gadda, Cécile Delcourt, et al. "Benefits of dimension reduction in penalized regression methods for high-dimensional grouped data: a case study in low sample size." Bioinformatics 35, no. 19 (April 1, 2019): 3628–34. http://dx.doi.org/10.1093/bioinformatics/btz135.

Full text
Abstract:
Motivation: In some prediction analyses, predictors have a natural grouping structure, and selecting predictors accounting for this additional information could be more effective for predicting the outcome accurately. Moreover, in a high-dimension low-sample-size framework, obtaining a good predictive model becomes very challenging. The objective of this work was to investigate the benefits of dimension reduction in penalized regression methods, in terms of prediction performance and variable selection consistency, in high-dimension low-sample-size data. Using two real datasets, we compared the performances of lasso, elastic net, group lasso, sparse group lasso, sparse partial least squares (PLS), group PLS and sparse group PLS. Results: Considering dimension reduction in penalized regression methods improved the prediction accuracy. The sparse group PLS reached the lowest prediction error while consistently selecting a few predictors from a single group. Availability and implementation: R codes for the prediction methods are freely available at https://github.com/SoufianeAjana/Blisar. Supplementary information: Supplementary data are available at Bioinformatics online.
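A hedged sketch of this kind of comparison, using two of the compared methods available in scikit-learn (lasso and elastic net) on synthetic high-dimension low-sample-size data; the group-aware and PLS-based methods from the paper live in the authors' R codes linked above.

    import numpy as np
    from sklearn.linear_model import Lasso, ElasticNet
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 40, 500                         # n << p: low sample size, high dimension
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:5] = 2.0     # only the first "group" is predictive
    y = X @ beta + rng.normal(size=n)

    for model in (Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
        score = cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error")
        print(type(model).__name__, -score.mean())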
APA, Harvard, Vancouver, ISO, and other styles
46

Lai, Zong Li, and Wen Tao Xu. "Detecting on Forward Error Correction Codes." Advanced Materials Research 760-762 (September 2013): 96–100. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.96.

Full text
Abstract:
As society develops, the dissemination of information plays an increasingly significant role. Continually reducing the error rate, enhancing the quality of communication, and constructing highly reliable, efficient, high-speed broadband communication systems is a genuinely difficult task. Forward error correction (FEC) is one particular type of error-correction coding introduced to protect data during transmission. In addition to a brief introduction to FEC, this article covers the categories of FEC codes and their applications, along with comparisons, and also describes the latest developments in these error-correction algorithms.
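The defining property of FEC, correcting errors at the receiver with no retransmission, is easy to see in the simplest possible scheme, the triple-repetition code (a generic illustration, not one of the article's codes):

    # Each bit is sent three times and decoded by majority vote, so any single
    # bit error per triple is corrected without asking the sender to resend.
    def fec_encode(bits):
        return [b for bit in bits for b in (bit, bit, bit)]

    def fec_decode(received):
        triples = [received[i:i+3] for i in range(0, len(received), 3)]
        return [1 if sum(t) >= 2 else 0 for t in triples]

    msg = [1, 0, 1, 1]
    tx = fec_encode(msg)
    tx[4] ^= 1                      # channel flips one bit
    assert fec_decode(tx) == msg    # corrected without retransmission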
APA, Harvard, Vancouver, ISO, and other styles
47

MacKay, D. J. C., G. Mitchison, and P. L. McFadden. "Sparse-Graph Codes for Quantum Error Correction." IEEE Transactions on Information Theory 50, no. 10 (October 2004): 2315–30. http://dx.doi.org/10.1109/tit.2004.834737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Helleseth, T., T. Klove, and V. I. Levenshtein. "Error-Correction Capability of Binary Linear Codes." IEEE Transactions on Information Theory 51, no. 4 (April 2005): 1408–23. http://dx.doi.org/10.1109/tit.2005.844080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Newman, Michael, and Yaoyun Shi. "Limitations on transversal computation through quantum homomorphic encryption." Quantum Information and Computation 18, no. 11&12 (September 2018): 927–48. http://dx.doi.org/10.26421/qic18.11-12-3.

Full text
Abstract:
Transversality is a simple and effective method for implementing quantum computation fault-tolerantly. However, no quantum error-correcting code (QECC) can transversally implement a quantum universal gate set (Eastin and Knill, Phys. Rev. Lett., 102, 110502). Since reversible classical computation is often a dominating part of useful quantum computation, whether or not it can be implemented transversally is an important open problem. We show that, other than a small set of non-additive codes that we cannot rule out, no binary QECC can transversally implement a classical reversible universal gate set. In particular, no such QECC can implement the Toffoli gate transversally. We prove our result by constructing an information-theoretically secure (but inefficient) quantum homomorphic encryption (ITS-QHE) scheme inspired by Ouyang et al. (arXiv:1508.00938). Homomorphic encryption allows the implementation of certain functions directly on encrypted data, i.e., homomorphically. Our scheme builds on almost any QECC and implements that code's transversal gate set homomorphically. We observe a restriction imposed by Nayak's bound (FOCS 1999) on ITS-QHE, implying that any ITS quantum fully homomorphic scheme (ITS-QFHE) implementing the full set of classical reversible functions must be highly inefficient. While our scheme incurs exponential overhead, any such QECC implementing Toffoli transversally would still violate this lower bound through our scheme.
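What "transversal" means can be seen in a tiny NumPy illustration (not the paper's construction): on the 3-qubit bit-flip code, logical X is implemented by applying a physical X to each qubit independently, so a fault on one physical qubit cannot spread to the others.

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    XXX = np.kron(np.kron(X, X), X)      # transversal implementation of logical X

    ket0L = np.zeros(8); ket0L[0] = 1    # |000> encodes logical |0>
    ket1L = np.zeros(8); ket1L[7] = 1    # |111> encodes logical |1>

    assert np.allclose(XXX @ ket0L, ket1L)   # X_L |0>_L = |1>_L
    assert np.allclose(XXX @ ket1L, ket0L)   # X_L |1>_L = |0>_L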
APA, Harvard, Vancouver, ISO, and other styles
50

Metzner, John J. "Soft Information Single Error Correction for Interactive Concatenated Codes." IEEE Transactions on Communications 60, no. 7 (July 2012): 1800–1810. http://dx.doi.org/10.1109/tcomm.2012.061412.110018.

Full text
APA, Harvard, Vancouver, ISO, and other styles