Dissertations / Theses on the topic 'Décodeur'
Consult the top 50 dissertations / theses for your research on the topic 'Décodeur.'
GUILLOUD, Frédéric. "Architecture générique de décodeur de codes LDPC." PhD thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00000806.
Angui, Ettiboua. "Conception d'un circuit intégré VLSI turbo-décodeur." Brest, 1994. http://www.theses.fr/1994BRES2005.
Vo, Nguyen Dang Khoa. "Compression vidéo basée sur l'exploitation d'un décodeur intelligent." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4136/document.
This PhD thesis studies the novel concept of the Smart Decoder (SDec), in which the decoder is given the ability to simulate the encoder and can conduct the rate-distortion (R-D) competition in the same way as the encoder. The proposed technique aims to reduce the signaling of competing coding modes and parameters. The general SDec coding scheme and several practical applications are proposed, followed by a long-term approach exploiting machine learning in video coding. The SDec coding scheme relies on a complex decoder able to reproduce the choices of the encoder from causal references, thus eliminating the need to signal coding modes and associated parameters. Several practical applications of the general SDec scheme are tested, using different coding modes in the competition on the reference blocks. Although the choice of the SDec reference block is still simple and limited, interesting gains are observed. The long-term research presents an innovative method that makes further use of the processing capacity of the decoder: machine learning techniques are exploited in video coding with the purpose of reducing the signaling overhead. Practical applications are given, using a classifier based on a support vector machine to predict the coding modes of a block. The block classification uses causal descriptors consisting of different types of histograms. Significant bit-rate savings are obtained, which confirms the potential of the approach.
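The last idea above, predicting a block's coding mode from causal histogram descriptors, can be illustrated with a toy classifier. The sketch below uses a nearest-centroid rule as a stand-in for the support vector machine of the thesis; the mode names and descriptors are invented for illustration.

```python
import numpy as np

def train_centroids(histograms, modes):
    """Fit one centroid per coding mode from causal histogram descriptors.
    Nearest-centroid stand-in for the SVM classifier described in the abstract."""
    return {m: np.mean([h for h, mm in zip(histograms, modes) if mm == m], axis=0)
            for m in set(modes)}

def predict_mode(centroids, histogram):
    """Predict the coding mode of a block as the mode with the closest centroid."""
    return min(centroids, key=lambda m: np.linalg.norm(centroids[m] - histogram))
```

A real system would replace the centroid rule with a trained SVM and use richer descriptors, but the signaling saving works the same way: the decoder runs the same classifier on the same causal data, so the mode index need not be transmitted.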
Martinet, Jacques. "Réalisation d'un turbo-décodeur paramétrable et modulaire en VLSI." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0005/MQ44925.pdf.
Bensadek, Khalid. "Développement d'un modèle VHDL synthétisable d'un décodeur de Viterbi." Mémoire, École de technologie supérieure, 2004. http://espace.etsmtl.ca/702/1/BENSADEK_Khalid.pdf.
Lauzon, Marc. "Réalisation d'un égaliseur à retour d'état avec décodeur intégré." Mémoire, École de technologie supérieure, 2001. http://espace.etsmtl.ca/828/1/LAUZON_Marc.pdf.
Full textHarb, Hassan. "Conception du décodeur NB-LDPC à débit ultra-élevé." Thesis, Lorient, 2018. http://www.theses.fr/2018LORIS504/document.
Non-Binary Low-Density Parity-Check (NB-LDPC) codes constitute an interesting class of error-correcting codes and are well known to outperform their binary counterparts. However, their non-binary nature makes their decoding process more complex. This PhD thesis proposes new decoding algorithms for NB-LDPC codes that shape hardware architectures of low complexity and high throughput. The first contribution reduces the complexity of the Check Node (CN) by minimizing the number of messages being processed, thanks to a pre-sorting step that orders the messages entering the CN by reliability: the least likely messages are omitted, and their dedicated hardware is simply removed. Processing only the most reliable messages yields a large reduction in the hardware complexity of the NB-LDPC decoder, at no significant cost in performance. A new hybrid CN architecture (H-CN), combining two state-of-the-art algorithms, the Forward-Backward CN (FB-CN) and the Syndrome-Based CN (SB-CN), is proposed; this hybrid model effectively exploits the advantages of pre-sorting. The thesis also proposes new methods to perform the Variable Node (VN) processing in the context of a pre-sorting-based architecture. Several implementation examples for NB-LDPC codes defined over GF(64) and GF(256) are presented. For the decoder to run faster, it must become parallel; from this perspective, a new efficient parallel decoder architecture is proposed for a rate-5/6 NB-LDPC code defined over GF(64), characterized by a fully parallel CN architecture that receives all input messages in a single clock cycle. The proposed methodology for the parallel implementation of NB-LDPC decoders opens a new vein in the hardware design of ultra-high-throughput decoders. Finally, since NB-LDPC decoders require a sorting function that extracts the P minimum values from a list of size Ns, a chapter is dedicated to this problem, in which an original architecture called First-Then-Second-Extrema-Selection (FTSES) is proposed.
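The sorting task mentioned at the end of this abstract, extracting the P smallest values from a list of size Ns, can be stated in a few lines of software. This is only a functional reference model of the operation; it does not reproduce the FTSES hardware architecture.

```python
import heapq

def p_minima(values, p):
    """Return the p smallest entries of a list as (index, value) pairs,
    sorted by increasing value. Software reference model of the sorting
    function an NB-LDPC check node needs, not the FTSES architecture itself."""
    return heapq.nsmallest(p, enumerate(values), key=lambda t: t[1])
```

In hardware, the difficulty is doing this with low latency for every check node at every iteration, which is what motivates dedicated selection architectures such as FTSES.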
Ouadid, Abdelkarim. "Prototype micro-électronique d'un décodeur itératif pour des codes doublement orthogonaux." Mémoire, École de technologie supérieure, 2004. http://espace.etsmtl.ca/715/1/OUADID_Abdelkarim.pdf.
Raoul, Olivier. "Conception et performances d'un circuit intégré turbo décodeur de codes produits." Brest, 1997. http://www.theses.fr/1997BRES2030.
Full textSingh, Arun Kumar. "Le compromis Débit-Fiabilité-Complexité dans les systèmes MMO multi-utilisateurs et coopératifs avec décodeurs ML et Lattice." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0005/document.
In telecommunications, rate-reliability and encoding-decoding computational complexity (measured in floating-point operations, flops) are widely considered to be limiting and interrelated bottlenecks; any attempt to significantly reduce complexity may come at the expense of a substantial degradation in error performance. Establishing this intertwined relationship constitutes an important research topic of substantial practical interest. This dissertation establishes fundamental rate, reliability, and complexity limits in general outage-limited multiple-input multiple-output (MIMO) communications and in the related point-to-point, multiuser, cooperative, two-directional, and feedback-aided scenarios. A large subset of the family of linear lattice encoding methods is explored, together with the two main families of decoders: maximum-likelihood (ML) based and lattice-based decoding. The algorithmic analysis focuses on efficient bounded-search implementations of these decoders, including a large family of sphere decoders. Specifically, the work provides a high signal-to-noise-ratio (SNR) analysis of the minimum computational reserves (flops or chip size) that allow for (a) a certain performance with respect to the diversity-multiplexing gain tradeoff (DMT), and (b) a vanishing gap to the uninterrupted (optimal) ML decoder or to the exact implementation of (regularized) lattice decoding. The derived complexity exponent describes the asymptotic rate of exponential increase of complexity, exponential in the number of codeword bits.
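The bounded-search decoders analyzed in this dissertation, including sphere decoders, perform a depth-first tree search that prunes every branch whose partial distance already exceeds the best candidate found so far. A minimal didactic sketch on an upper-triangular system R (as obtained from a QR decomposition of the channel matrix), with the complexity accounting of the thesis omitted:

```python
import numpy as np

def sphere_decode(R, y, alphabet=(-1, 1), radius=float("inf")):
    """Depth-first sphere decoding of the ML solution of min ||y - R x||^2
    over a finite alphabet, with R upper triangular. Didactic sketch only:
    no radius update heuristics or flop counting."""
    n = R.shape[0]
    best = {"x": None, "d": radius}

    def search(level, partial, dist):
        if dist >= best["d"]:
            return                      # prune: already outside current sphere
        if level < 0:
            best["x"], best["d"] = partial.copy(), dist
            return
        for s in alphabet:
            partial[level] = s
            r = y[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]
```

The pruning test is what turns exhaustive enumeration into a bounded search: with high SNR the first good candidate shrinks the sphere quickly, which is the regime the dissertation's complexity exponents quantify.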
Le, Trung Khoa. "Nouvelle approche pour une implémentation matérielle à faible complexité du décodeur PGDBF." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0902/document.
The Probabilistic Gradient Descent Bit Flipping (PGDBF) algorithm has recently been introduced as a new type of hard-decision decoder for Low-Density Parity-Check (LDPC) codes over the Binary Symmetric Channel. While following precisely the decoding steps of the deterministic Gradient Descent Bit Flipping (GDBF) decoder, PGDBF additionally incorporates a random perturbation in the flipping operation of the Variable Nodes (VNs), and achieves an outstanding decoding performance, better than all known bit-flipping decoders and approaching that of soft-decision decoders. This thesis proposes several hardware implementations of PGDBF, together with a theoretical analysis of its error-correction capability. A Markov-chain analysis of the decoder shows that, thanks to the random perturbation in the VN processing, PGDBF escapes from the trapping states that prevent the convergence of the deterministic decoder. The proposed analysis method also allows the PGDBF performance to be predicted and formulated as a Frame Error Rate equation, as a function of the iteration number, for a given error pattern, and it clearly explains several phenomena of PGDBF, such as the gain obtained by re-decoding (restarting) on a received error pattern. The implementation of PGDBF is then addressed. The conventional implementation, in which a probabilistic signal generator is added on top of the GDBF, inevitably increases hardware complexity. Several methods for generating the probabilistic signals are therefore introduced to minimize this overhead; they are motivated by a statistical analysis that reveals the critical features the binary random sequence must have to preserve decoding performance, and that suggests directions for simplification. Synthesis results show that the implemented PGDBF, with the proposed probabilistic signal generation method, requires negligible extra complexity while matching the decoding performance of the theoretical PGDBF. An interesting implementation of PGDBF for Quasi-Cyclic LDPC (QC-LDPC) codes is presented in the last part of the thesis: exploiting the structure of QC-LDPC codes, a novel architecture called the Variable-Node Shift Architecture (VNSA) is proposed. Implemented with the VNSA, the decoder is even less complex than the deterministic GDBF while preserving the decoding performance of the theoretical PGDBF; the VNSA is furthermore shown to apply to other types of LDPC decoding algorithms.
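The PGDBF principle described above can be sketched in software. The energy function below follows the common GDBF formulation for the BSC (channel disagreement plus unsatisfied checks); the flip probability, the code, and the schedule are illustrative, not the exact variant of the thesis.

```python
import numpy as np

def pgdbf_decode(H, y, p=0.7, max_iter=100, seed=0):
    """Probabilistic Gradient Descent Bit Flipping over the BSC (sketch).
    H: (m, n) binary parity-check matrix; y: received hard-decision word.
    Each iteration flips the maximal-energy bits, but only with probability p,
    which is the random perturbation that lets the decoder leave trapping states."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                       # all parity checks satisfied
        # energy per bit: disagreement with the channel + unsatisfied checks
        energy = (x != y).astype(int) + syndrome @ H
        candidates = energy == energy.max()      # GDBF would flip all of these
        flips = candidates & (rng.random(len(x)) < p)   # PGDBF flips each with prob. p
        x = x ^ flips.astype(x.dtype)
    return x, False
```

On a single-error pattern the deterministic and probabilistic variants behave alike; the benefit of the random perturbation appears on error patterns that trap plain GDBF in a cycle.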
Ternet, François. "Caloduc miniature pour le refroidissement passif des composants électroniques d'un décodeur Orange." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC221.
This report presents the study of passive two-phase cooling of a television decoder using a heat pipe. It is composed of two main parts: a numerical study and an experimental study. The numerical study determines the geometric and physico-chemical characteristics of heat pipes needed to cool the TV decoder optimally. Two numerical analyses are carried out. The first is an analytical model based on a global study of the heat pipe, used to determine the maximum heat flux that can be dissipated; different working fluids and various heat-pipe architectures are tested in order to determine the best micro-channel configuration while respecting the operating limits of heat pipes. A second model characterizes the local physical parameters, such as the pressure in the liquid and vapour phases, the temperature, the thermal resistances, and the capillary radius; this simulation solves the differential equations with a Runge-Kutta method. In the experimental part, a test bench has been installed in the laboratory to study heat-pipe performance under various experimental conditions, and a filling system has been developed so that various working fluids and charges can be tested. Finally, the best heat-pipe configuration is tested to cool the Orange decoder, after preliminary tests characterizing the conventional cooling system and the heat-pipe cooling mode.
Pouliot, Louis Edmond. "Prototypage rapide d'un décodeur en treillis modulaire/hypercube pour des systèmes de communications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq26261.pdf.
Full textDkhissi, El Houssine. "Étude et évaluation de la stabilité d'un égaliseur DFE avec décodeur partiellement intégré." Mémoire, École de technologie supérieure, 2003.
Ekobo, Akoa Brice. "Détection et conciliation d'erreurs intégrées dans un décodeur vidéo : utilisation des techniques d'analyse statistique." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT069/document.
This report presents the research conducted during my PhD, which aims to develop an efficient algorithm for correcting errors in a digital image decoding process and to ensure a high level of visual quality for the decoded images. Statistical analysis techniques are studied to detect and conceal the artefacts, and a control loop is implemented to monitor image visual quality. The manuscript consists of six chapters. The first chapter reviews the principal state-of-the-art image quality assessment methods and introduces our proposal: a video quality measurement tool (VQMT) that uses the Human Visual System to indicate the visual quality of a video (or an image). Three statistical learning models of the VQMT are designed, based on classification, artificial neural networks, and non-linear regression; they are developed in the second, third, and fourth chapters respectively. The fifth chapter reviews the principal state-of-the-art image error concealment techniques. The last chapter uses the results of the previous chapters to design an algorithm for error concealment in images. The demonstration considers blur and noise artefacts and is based on the Wiener filter, optimized on the criterion of the local linear minimum mean square error. The results are presented and discussed to show how the VQMT improves the performance of the implemented error concealment algorithm.
Planells-Rodríguez, Milena. "Modélisation des erreurs en sortie du décodeur dans une chaîne de transmission par satellite." Paris, ENST, 2003. http://www.theses.fr/2003ENST0023.
This dissertation studies the behavior of the errors at the output of the decoder in a satellite communication system. Two different types of channel coding are considered: on the one hand, a classical concatenation of a Reed-Solomon code with a convolutional code and interleaving; on the other hand, a code from the turbo-code family. The first system uses maximum-likelihood decoding of the convolutional code. Errors at the output of such a decoder are known to be grouped in bursts, due to the memory of the code; the run of correct bits between bursts is called a gap. The output of a maximum-likelihood decoder can thus be modeled by a two-state Markov chain: a "good" state in which no errors occur, and a "bad" state in which errors appear in bursts. Regarding burst modeling, previously proposed models did not fit the simulation results for short and average burst lengths; we therefore developed a new model, based on the properties of the code, that fits the whole range of possible burst lengths. The second coding system replaces maximum-likelihood decoding with iterative decoding, based on the successive decoding of each constituent code according to the Maximum A Posteriori (MAP) principle. The dissertation analyses the behavior of the errors at the output of such iterative decoders and proposes a model that closely fits the errors observed in Monte Carlo simulations.
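The two-state Markov error model described in this abstract (a Gilbert-type model) is easy to simulate. The transition and error probabilities below are illustrative placeholders, not the values fitted in the thesis.

```python
import random

def gilbert_errors(n, p_gb, p_bg, p_err_bad, seed=1):
    """Simulate a two-state Markov error process: a 'good' state producing
    no errors and a 'bad' state producing bursts. p_gb: good->bad transition
    probability; p_bg: bad->good; p_err_bad: bit-error probability in the bad
    state. Returns a 0/1 error indicator sequence of length n."""
    rng = random.Random(seed)
    state, out = "good", []
    for _ in range(n):
        if state == "good":
            out.append(0)                         # gaps: error-free runs
            if rng.random() < p_gb:
                state = "bad"
        else:
            out.append(1 if rng.random() < p_err_bad else 0)   # burst region
            if rng.random() < p_bg:
                state = "good"
    return out
```

Fitting such a model means matching the simulated burst-length and gap-length distributions to those measured at the decoder output, which is precisely where the earlier models cited in the abstract fell short for short bursts.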
Thiesse, Jean-Marc. "Codage vidéo flexible par association d'un décodeur intelligent et d'un encodeur basé optimisation débit-distorsion." PhD thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00719058.
Thiesse, Jean-Marc. "Codage vidéo flexible par association d'un décodeur intelligent et d'un encodeur basé optimisation débit-distorsion." Nice, 2012. http://www.theses.fr/2012NICE4058.
This PhD thesis deals with the improvement of video compression efficiency. Both conventional and breakthrough approaches are investigated in order to propose efficient Intra and Inter coding methods for the next generations of video coding standards. Two tools are studied within the conventional approach. First, syntax elements are transmitted using a data-hiding method that embeds indices into the luminance and chrominance residuals in a rate-distortion-optimal way. Secondly, the large redundancies among motion vectors are exploited to improve their coding: after a statistical analysis of the previously used vectors, an accurate forecast is performed to favor certain vector residuals during a last step that modifies the original residual distribution. 90% of the coded vectors are efficiently forecast by this method, which significantly reduces their coding cost. The breakthrough approach stems from the observation that the H.264/AVC standard and its successor HEVC are based on a predictive scheme with multiple coding choices; future improvements will therefore have to rely extensively on competition between many coding modes. Such schemes, however, are bounded by the cost of the signaling flags, so some decisions must be transferred to the decoder side. A framework based on determining encoding parameters at both the encoder and the decoder is consequently proposed, and applied to Intra prediction modes on the one hand and to the emerging theory of compressed sensing on the other. Promising results are reported and confirm the potential of this innovative solution.
Ta, Thomas. "Implémentation sur FPGA d'un turbo codeur-décodeur en blocs à haut débit avec une faible complexité." Rennes 1, 2003. http://www.theses.fr/2003REN1S145.
El Chall, Rida. "Récepteur itératif pour les systèmes MIMO-OFDM basé sur le décodage sphérique : convergence, performance et complexité." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0019/document.
Recently, iterative processing has been widely considered as a way to achieve near-capacity performance and reliable high-data-rate transmission in future wireless communication systems. However, such processing poses significant challenges for efficient receiver design. This thesis investigates an iterative receiver combining multiple-input multiple-output (MIMO) detection with channel decoding for high-data-rate transmission, considering the convergence, performance, and computational complexity of the iterative receiver for MIMO-OFDM systems. First, the most relevant hard-output and soft-output MIMO detection algorithms, based on sphere decoding, K-Best decoding, and interference cancellation, are reviewed, and a low-complexity K-Best (LC-KBest) decoder is proposed to substantially reduce the computational complexity without significant performance degradation. The convergence behavior of these detection algorithms combined with various forward error-correcting codes, namely the LTE turbo code and an LDPC code, is then analyzed with the help of Extrinsic Information Transfer (EXIT) charts; based on this analysis, a new scheduling of the required inner and outer iterations is suggested. The performance of the proposed receiver is evaluated in various LTE channel environments and compared with other MIMO detection schemes. Secondly, the computational complexity of the iterative receiver with different channel coding techniques is evaluated and compared for different modulation orders and coding rates. Simulation results show that the proposed approaches achieve near-optimal performance while substantially reducing the computational complexity of the system. From a practical point of view, fixed-point representation is usually used to reduce hardware costs in terms of area, power consumption, and execution time; we therefore present an efficient fixed-point arithmetic for the proposed iterative receiver based on the LC-KBest decoder. Additionally, the impact of channel estimation on system performance is studied. The proposed iterative receiver is tested in a real-time environment using the MIMO WARP platform.
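The K-Best principle mentioned in this abstract, a breadth-first tree search that keeps only the K best partial candidates at each detection level, can be sketched in a few lines. This is a didactic model of the principle on an upper-triangular system (after QR decomposition of the channel matrix); the LC-KBest simplifications of the thesis are not reproduced.

```python
import numpy as np

def k_best_detect(R, y, alphabet=(-1, 1), k=2):
    """Breadth-first K-Best detection of min ||y - R x||^2, R upper triangular.
    At each level, every surviving candidate is expanded with all alphabet
    symbols and only the k smallest-distance candidates are kept."""
    n = R.shape[0]
    candidates = [(0.0, [])]          # (accumulated distance, symbols for levels level..n-1)
    for level in range(n - 1, -1, -1):
        expanded = []
        for dist, partial in candidates:
            for s in alphabet:
                new = [s] + partial
                r = y[level] - R[level, level:] @ np.array(new, dtype=float)
                expanded.append((dist + r * r, new))
        candidates = sorted(expanded, key=lambda t: t[0])[:k]   # prune to k best
    return np.array(candidates[0][1])
```

Unlike depth-first sphere decoding, the work per level is fixed (k times the alphabet size), which gives the constant throughput that makes K-Best attractive for hardware, at the cost of a possibly suboptimal decision when k is small.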
Moretti, Sofia. "“Dyslexie: panne de décodeur et de séquenceur”. Proposta di traduzione di un testo scientifico-divulgativo dal francese all’italiano." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21342/.
Goalic, André. "Etude et réalisation de codeur/décodeur de parole à bas débit pour la téléphonie numérique acoustique sous-marine." Brest, 1994. http://www.theses.fr/1994BRES2003.
Marchand, Cédric. "Étude et implémentation d'un décodeur LDPC pour les nouvelles normes de diffusion de télévision numérique (DVB-T2 et S2)." Lorient, 2010. https://hal.archives-ouvertes.fr/tel-01151985.
LDPC codes are, like turbo codes, able to achieve decoding performance close to the Shannon limit. This performance, combined with a relatively easy implementation, makes them very attractive for digital communication systems; digital video broadcasting by satellite under the DVB-S2 standard was the first standard to include an LDPC code. This thesis deals with optimizing the implementation of an LDPC decoder for the DVB-S2, -T2, and -C2 standards. After a state-of-the-art overview, the layered decoder is chosen as the base architecture for the implementation. The memory conflicts caused by the matrix structure specific to the DVB-S2, -T2, and -C2 standards had to be resolved; two new contributions are studied to solve this problem, one based on the construction of an equivalent matrix and the other relying on the repetition of layers. The conflicts inherent in the pipelined architecture are solved by an efficient scheduling found with the help of graph theory. Memory size is a major contributor to area and power consumption, so its reduction to a minimum is studied: well-chosen saturation and an optimal partitioning of the memory banks lead to a significant reduction compared to the state of the art, and the use of single-port RAM instead of dual-port RAM is studied to further reduce memory cost. The last chapter answers the need for a decoder able to decode x streams in parallel at a reduced cost compared to the use of x separate decoders.
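The layered decoders discussed in this abstract are built around check-node updates applied layer by layer. The sketch below is the generic normalized min-sum check-node rule commonly used in such decoders, not the exact saturated fixed-point datapath of the thesis; the normalization factor is illustrative.

```python
def min_sum_check_node(msgs, factor=0.75):
    """Normalized min-sum check-node update. msgs: incoming variable-to-check
    LLRs. For each edge, the outgoing magnitude is the minimum |LLR| over the
    *other* edges, and the sign is the product of the other signs."""
    sign_prod = 1
    for m in msgs:
        sign_prod *= -1 if m < 0 else 1
    out = []
    for i, m in enumerate(msgs):
        others = msgs[:i] + msgs[i + 1:]
        mag = min(abs(v) for v in others)
        s = sign_prod * (-1 if m < 0 else 1)     # divide out this edge's sign
        out.append(factor * s * mag)
    return out
```

Hardware implementations keep only the two smallest magnitudes and the sign product per check node, which is exactly what makes the memory organization (and its saturation) the dominant cost the thesis optimizes.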
Bouchoux, Sophie. "Apport de la reconfiguration dynamique au traitement d'images embarqué : étude de cas : implantation du décodeur entropique de JPEG 2000." Dijon, 2005. http://www.theses.fr/2005DIJOS027.
The appearance on the market of partially and quickly reprogrammable FPGAs has led to the development of new techniques such as dynamic reconfiguration. In order to compare dynamic reconfiguration with static configuration, an electronic board, the ARDOISE board, was developed. This thesis covers the implementation of the JPEG 2000 algorithm, and particularly of its entropy decoder, on this architecture, together with a study of the performance obtained. To compare the results of the two methods, evaluation criteria relating to cost, performance, and efficiency were defined. The implementations carried out are: the arithmetic decoder in partial dynamic reconfiguration on ARDOISE, the entropy decoder in static configuration on a Xilinx FPGA, and the entropy decoder in dynamic reconfiguration on ARDOISE.
Ouertani, Rym. "Algorithmes de décodage pour les systèmes multi-antennes à complexité réduite." PhD thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00718214.
Lima, Léocarlos. "Architecture de décodage pour codes algébriques-géométriques basés sur des courbes d'Hermite." Paris, ENST, 2004. http://www.theses.fr/2004ENST0039.
This thesis describes an efficient architecture for a decoding algorithm for algebraic-geometric (AG) codes based on Hermitian curves. The work combines two distinct, complementary competencies: the study of decoding algorithms for AG codes, and the development of architectures for the hardware implementation of these decoders. The algorithm at the heart of this work iteratively searches for error-locator and error-evaluator functions that satisfy a key-equation criterion. A new architecture is proposed for this decoder, together with optimized operators implementing its most frequent calculations. The description of the decoder architecture is preceded by a description of architectures for arithmetic units over finite fields of characteristic 2, which are necessary to implement any channel coding/decoding system based on block codes.
Benhallam, Abdelahad. "Contribution à l'étude de l'algorithme à pile pour le décodage séquentiel des codes convolutionnels : conception d'un logiciel de simulation du décodeur." Toulouse, INPT, 1988. http://www.theses.fr/1988INPT083H.
Gnaedig, David. "High-Speed decoding of convolutional Turbo Codes." Lorient, 2005. http://www.theses.fr/2005LORIS050.
Turbo codes are built as a concatenation of several convolutional codes separated by interleavers. In 1993, they revolutionized error-correcting coding by approaching the Shannon limit within a few tenths of a decibel. This performance is all the more remarkable because the iterative decoding principle allows the decoder to be implemented in hardware with relatively low complexity. Owing to their success, they are now widely used in practical systems and open standards. The increasing demand for high-throughput broadband applications strongly calls for high-speed decoder implementations, leading to new challenges. The objective of this thesis is to study high-throughput decoding architectures offering the best throughput-versus-complexity trade-off. We first laid down a simple expression to evaluate the benefits of an architecture in terms of throughput and efficiency. Applying this model to turbo decoding highlighted three parameters that influence the throughput and efficiency of the decoder: the degree of parallelism, the utilization ratio (activity) of the processing units, and the clock frequency. We tackled each of these points by investigating a large portion of the design space, ranging from joint code-and-decoder design to the optimization of the decoder architecture for a given code or set of codes. We first proposed a new coding scheme, called Multiple Slice Turbo Codes, that minimizes the memory requirements of the decoder through the parallel decoding of the received codeword by several soft-input soft-output processors; to resolve the resulting concurrent memory accesses, we designed a novel hierarchical interleaver. Second, we explored several solutions for improving the activity of the processors, including a hybrid parallel/serial architecture and two new schedules for parallel decoding: one internal to the processors, and another at a more global level in association with an adapted constrained interleaver. Finally, thanks to an original method that reduces the critical path in the recursive computation of the state metrics, we doubled the maximal clock frequency of the decoder at no cost on an FPGA circuit. Most of the techniques developed in this thesis were validated by designing a turbo decoder for the wireless broadband access standard WiMAX (IEEE 802.16) that achieves excellent error-decoding performance at a throughput of 100 Mbit/s on a single FPGA.
Verdier, François. "Conception d'architectures embarquées : des décodeurs LDPC aux systèmes sur puce reconfigurables." Habilitation à diriger des recherches, Université de Cergy Pontoise, 2006. http://tel.archives-ouvertes.fr/tel-00524534.
Nafkha, Amor. "A geometrical approach detector for solving the combinatorial optimisation problem : application in wireless communication systems." Lorient, 2006. http://www.theses.fr/2006LORIS067.
The demand for mobile communication systems with high data rates and improved link quality for a variety of applications has increased dramatically in recent years. New concepts and methods are necessary to meet this demand, counteracting or exploiting the impairments of the mobile communication channel while optimally using limited resources such as bandwidth and power. The problem of finding the least-squares solution to a system of linear equations, where the unknown vector consists of integers but the matrix coefficients and the given vector are real, arises in many applications: communications, cryptography, MC-CDMA, and MIMO, to name a few. Maximum-likelihood (ML) decoding is equivalent to finding the closest lattice point in an n-dimensional real space, a problem known in general to be NP-hard. In this thesis, a polynomial-time approximation method called the Geometrical Intersection and Selection Detector (GISD) is applied to the ML decoding problem. The proposed approach is based on two complementary "real-time" operational-research methods: intensification and diversification. It has three characteristics that make it very attractive for VLSI implementation. First, the performance of the GISD receiver is superior to that of other sub-optimal detection methods and provides a good approximation of the optimal detector. Second, the inherent parallel structure of the method leads to a very suitable hardware implementation. Finally, the GISD achieves near-optimal performance with constant polynomial-time computational complexity, O(n^3), unlike sphere decoding, whose complexity is exponential at low SNR. The proposed detector can be efficiently employed in most wireless communication systems: MIMO, MC-CDMA, MIMO-CDMA, etc.
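The ML detection problem this abstract reduces to a closest-lattice-point search can be solved exactly by brute force, but only at toy sizes, which is what motivates polynomial-time approximations such as GISD. The sketch below is the exhaustive reference solution, not the GISD heuristic.

```python
import itertools
import numpy as np

def ml_detect(Hm, y, alphabet=(-1, 1)):
    """Exhaustive maximum-likelihood detection: return the alphabet vector x
    minimizing ||y - Hm x||^2. Exponential in the vector length, so usable
    only as a ground-truth reference on small problems."""
    best, best_d = None, float("inf")
    for x in itertools.product(alphabet, repeat=Hm.shape[1]):
        d = float(np.linalg.norm(y - Hm @ np.array(x)) ** 2)
        if d < best_d:
            best, best_d = x, d
    return np.array(best)
```

An intensification/diversification detector like GISD explores only a structured subset of these candidates, trading the exponential enumeration for the O(n^3) complexity quoted in the abstract.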
Lartigaud, David-Olivier. "Décoder le software art." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010707.
The years since the late 90s have seen the emergence of a range of artistic practices using software as a critical base material. A number of exhibitions, events, websites, and writings have been dedicated to their productions, under the designation "Software Art". One of the main interests of this artistic "movement" lies in its bringing up to date the issue of computer programming in art, an issue acknowledged since the 60s and correlated with various questionings addressing digital culture, the aesthetic approach to code, and the position of such productions with regard to art history. Starting from a chronological presentation of the events, publications, and artworks related to Software Art, this thesis puts forward an analytical and critical study, with a view to explaining how this "movement" singles itself out within a broader, decade-long phenomenon of renewed interest in computer programming in art, notably through the use of a programming language such as Processing. The intent of this investigation is not to build a theoretical apparatus for Software Art, but rather to understand the stakes at hand at the time of its visibility, from roughly 2001 to 2006, both with respect to the establishment of a specific critical field and to the implementation of artistic and institutional strategies providing the conditions for its diffusion.
Khalili, Ramin. "Des propriétés de transmission pour la couche IP." Paris 6, 2005. http://www.theses.fr/2005PA066516.
Full text
Danjean, Ludovic. "Algorithmes itératifs à faible complexité pour le codage de canal et le compressed sensing." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00797447.
Full text
Amador, Erick. "Décodeurs LDPC à faible consommation énergétique." Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00599316.
Full text
Duclos-Cianci, Guillaume. "Décodeurs rapides pour codes topologiques quantiques." Mémoire, Université de Sherbrooke, 2010. http://savoirs.usherbrooke.ca/handle/11143/4868.
Full text
Abassi, Oussama. "Étude des décodeurs LDPC non-binaires." Lorient, 2014. https://hal.science/tel-01176817.
Full text
Léonardon, Mathieu. "Décodage de codes polaires sur des architectures programmables." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0399/document.
Full text
Polar codes are a recently invented class of error-correcting codes that are of interest to both researchers and industry, as evidenced by their selection for the coding of control channels in the next generation of cellular mobile communications (5G). One of the challenges of future mobile networks is the virtualization of digital signal processing, including channel encoding and decoding algorithms. In order to improve network flexibility, these algorithms must be written in software and deployed on programmable architectures. Such a network infrastructure allows dynamic balancing of the computational effort across the network, as well as inter-cell cooperation. These techniques are designed to reduce energy consumption, increase throughput and reduce communication latency. The work presented in this manuscript focuses on the software implementation of polar code decoding algorithms and the design of programmable architectures specialized in their execution. One of the main characteristics of a mobile communication chain is that the state of the communication channel changes over time. To address this issue, adaptive modulation and coding techniques are used in communication standards. These techniques require the decoders to support a wide range of codes: they must be generic. The first contribution of this work is the software implementation of generic decoders for "List" polar decoding algorithms on general-purpose processors. In addition to their genericity, the proposed decoders are also flexible. Trade-offs between correction power, throughput and decoding latency are enabled by fine-tuning the algorithms. In addition, the throughputs of the proposed decoders achieve state-of-the-art performance and, in some cases, exceed it. The second contribution of this work is the proposal of a new high-performance programmable architecture specialized in polar code decoding. It is part of the family of Application-Specific Instruction-set Processors (ASIP). The base architecture is a RISC processor. This base architecture is then configured, its instruction set is extended and dedicated hardware units are added. Simulations show that this architecture achieves throughputs and latencies close to state-of-the-art software implementations on general-purpose processors. Energy consumption is reduced by an order of magnitude: the energy required per decoded bit is about 10 nJ on general-purpose processors compared to 1 nJ on the proposed processors when considering the Successive Cancellation (SC) decoding algorithm of a (1024,512) polar code. The third contribution of this work is the design of another ASIP architecture. It differs from the previous one by the use of an alternative design methodology. Instead of being based on a RISC architecture, the proposed processor belongs to the class of Transport Triggered Architectures (TTA). It is characterized by a greater modularity that significantly improves the efficiency of the processor. The measured throughputs are then higher than those obtained on general-purpose processors. The energy consumption is reduced to about 0.1 nJ per decoded bit for a (1024,512) polar code with the SC decoding algorithm. This corresponds to a reduction of two orders of magnitude compared to the consumption measured on general-purpose processors.
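The Successive Cancellation (SC) algorithm evaluated on these processors can be sketched recursively. Below is a minimal, unoptimized LLR-domain SC decoder (using the min-sum approximation of the f-function); the N=4 code and its frozen-bit positions are a toy choice for illustration, not a 5G construction:

```python
import math

def f(a, b):  # check-node update (min-sum approximation of the f-function)
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def sc_decode(llr, frozen):
    """Recursive successive-cancellation decoding.
    llr: channel LLRs (positive means bit 0 is more likely).
    frozen: list of booleans, True where the bit is frozen to 0.
    Returns (u_hat, x_hat): estimated message bits and codeword."""
    n = len(llr)
    if n == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1
        return [u], [u]
    h = n // 2
    # Left half: f-combined LLRs
    u_l, x_l = sc_decode([f(llr[i], llr[i + h]) for i in range(h)], frozen[:h])
    # Right half: g-combined LLRs, using the re-encoded left bits
    u_r, x_r = sc_decode(
        [llr[i + h] + (1 - 2 * x_l[i]) * llr[i] for i in range(h)], frozen[h:])
    return u_l + u_r, [x_l[i] ^ x_r[i] for i in range(h)] + x_r

# Toy (N=4, K=2) code; channel LLRs for the transmitted codeword [1, 0, 1, 0]
u_hat, x_hat = sc_decode([-2.0, 2.5, -1.8, 2.2], [True, True, False, False])
print(u_hat, x_hat)  # → [0, 0, 1, 0] [1, 0, 1, 0]
```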
Maurice, Denise. "Codes correcteurs quantiques pouvant se décoder itérativement." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066361/document.
Full text
Quantum information is a developing field of study with various applications (in cryptography, fast computing, etc.). Its basic element, the qubit, is volatile: any measurement changes its value. This also applies to involuntary measurements due to imperfect insulation (as seen in any practical setting). Unless we can detect and correct these modifications, any quantum computation is bound to fail. These unwanted modifications remind us of errors that can happen in the transmission of a (classical) message, which can be accounted for with an error-correcting code. For quantum errors, we need quantum error-correcting codes. In order to prevent the accumulation of errors that cannot be compensated, these quantum error-correcting codes need to be both efficient and fast. Among classical error-correcting codes, Low Density Parity Check (LDPC) codes provide many perks: they are easy to create, fast to decode (with an iterative decoding algorithm known as belief propagation) and close to optimal. Their quantum equivalents should then be good candidates, even if they present two major drawbacks (among other less important ones). A quantum error-correcting code can be seen as a combination of two classical codes with orthogonal parity-check matrices. The first issue is the building of two efficient codes with this property. The other is in the decoding: each row of the parity-check matrix of one code gives a low-weight codeword of the other code. With quantum codes, the corresponding errors do not affect the system, but they are difficult to account for with the usual iterative decoding algorithm. In the first place, this thesis studies an existing construction, based on the product of two classical codes. This construction has good theoretical properties (dimension and minimal distance), but has shown disappointing practical results, which are explained by the resulting code's structure. Several variations, which could have good theoretical properties, are also analyzed but produce no usable results at this time. We then move to the study of q-ary codes. This construction, derived from classical codes, is the enlargement of an existing LDPC code through the augmentation of its alphabet. It applies to any 2-regular quantum code (meaning one whose parity-check matrices have exactly two ones per column) and gives good performance with the well-known toric code, which can be easily decoded with its own specific algorithm (but not that easily with the usual belief-propagation algorithm). Finally, this thesis explores a quantum equivalent of spatially coupled codes, an idea also derived from the classical field, where it has been proven to greatly enhance the performance of LDPC codes. While no such proof has yet been derived in the quantum setting, some spatially-coupled constructions have led to excellent performance, well beyond other recent constructions.
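The iterative belief-propagation decoding referred to above can be illustrated on a classical binary code. Below is a toy LLR-domain min-sum decoder (a common approximation of full belief propagation); the (7,4) Hamming matrix in the example is merely a small stand-in for a sparse LDPC matrix:

```python
def min_sum_decode(H, llr, iters=10):
    """LLR-domain min-sum belief propagation on parity-check matrix H."""
    m, n = len(H), len(llr)
    c2v = [[0.0] * n for _ in range(m)]  # check-to-variable messages
    for _ in range(iters):
        # Variable-to-check: channel LLR plus the other incoming check messages
        v2c = [[llr[j] + sum(c2v[k][j] for k in range(m) if H[k][j] and k != i)
                if H[i][j] else 0.0 for j in range(n)] for i in range(m)]
        # Check-to-variable: min-sum approximation of the tanh rule
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    others = [v2c[i][k] for k in range(n) if H[i][k] and k != j]
                    sign = 1.0
                    for o in others:
                        sign = -sign if o < 0 else sign
                    c2v[i][j] = sign * min(abs(o) for o in others)
        # A-posteriori LLRs and hard decision; stop when all checks pass
        total = [llr[j] + sum(c2v[i][j] for i in range(m) if H[i][j])
                 for j in range(n)]
        hard = [1 if t < 0 else 0 for t in total]
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0 for i in range(m)):
            return hard
    return hard

# (7,4) Hamming parity-check matrix; correct a single flipped bit (index 4)
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_sum_decode(H, [2, 2, 2, 2, -1, 2, 2]))  # → [0, 0, 0, 0, 0, 0, 0]
```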
Zahabi, Mohammad Reza. "Analog approaches in digital receivers." Limoges, 2008. https://aurore.unilim.fr/theses/nxfile/default/42bc3667-aba8-4a87-9fbc-b35358105335/blobholder:0/2008LIMO4009.pdf.
Full text
This thesis proposes the use of analog circuits to implement digital algorithms, the goal being to reduce complexity and power consumption while increasing speed. Two computationally demanding applications are considered: decoding and FIR filtering. A highly efficient analog CMOS structure is proposed for a Viterbi decoder and for a decoder operating on Tanner graphs. The proposed structures were implemented and tested with the Cadence tools, demonstrating the validity of our approach. As for signal processing at the decoder input, a programmable FIR filter in CMOS technology was studied, designed and implemented. The proposed structure is well suited to high-speed communication systems. The filter has an analog input and a sampled output, is based on a simple CMOS inverter, and can therefore be efficiently integrated with the digital parts on a single chip.
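A direct-form FIR filter, the functional reference for the analog sampled structure described above, reduces to a sliding inner product. A minimal software model (unrelated to the CMOS implementation details):

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        y.append(sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0))
    return y

# Impulse at n=0 and n=3 through taps [1, 2]
print(fir_filter([1, 0, 0, 1], [1, 2]))  # → [1, 2, 0, 1]
```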
Takam Tchendjou, Ghislain. "Contrôle des performances et conciliation d’erreurs dans les décodeurs d’image." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT107/document.
Full text
This thesis deals with the development and implementation of error detection and correction algorithms in images, in order to control the quality of the images produced at the output of digital decoders. To achieve the objectives of this work, we first study the state of the art of existing approaches. Examination of classically used approaches justified the study of a set of objective methods for evaluating the visual quality of images, based on machine learning. These algorithms take as inputs a set of characteristics or metrics extracted from the images. Depending on the characteristics extracted and on the availability or not of a reference image, two kinds of objective evaluation methods have been developed: the first based on full-reference metrics and the second based on no-reference metrics, both of them handling non-specific distortions. In addition to these objective evaluation methods, a method for evaluating and improving the quality of images based on the detection and correction of defective pixels has been implemented. The proposed results have contributed to refining visual image quality assessment methods, as well as to the construction of objective algorithms for detecting and correcting defective pixels, compared with the various currently used methods. An FPGA implementation has been carried out to integrate the models with the best performance in the simulation phase.
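The defective-pixel detection and correction idea can be illustrated with a simple median test: a pixel deviating strongly from its neighborhood median is declared defective and replaced. This is a generic toy version; the threshold and the 3x3 window are assumptions for illustration, not the thesis's algorithm:

```python
def correct_defective_pixels(img, threshold=50):
    """Detect and correct isolated defective pixels: a pixel whose value
    deviates from the median of its 8 neighbours by more than `threshold`
    is replaced by that median. Border pixels are left untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(img[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0))
            med = (neigh[3] + neigh[4]) / 2  # median of the 8 neighbours
            if abs(img[y][x] - med) > threshold:
                out[y][x] = med
    return out

# A stuck pixel (255) in a flat 100-valued patch gets replaced by the median
img = [[100, 100, 100], [100, 255, 100], [100, 100, 100]]
print(correct_defective_pixels(img)[1][1])  # → 100.0
```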
Gorin, Jérôme. "Machine virtuelle universelle pour codage vidéo reconfigurable." Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00997683.
Full text
Liu, Haisheng. "Contributions à la maîtrise de la consommation dans des turbo-décodeurs." Télécom Bretagne, 2009. http://www.theses.fr/2009TELB0106.
Full text
Pignoly, Vincent. "Étude de codes LDPC pour applications spatiales optiques et conception des décodeurs associés." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0025.
Full text
Digital communication systems are everywhere in our daily life. The evolution of needs drives the research and development of innovative solutions for future communication systems. Considering space digital communications, most satellites use radiofrequency links to communicate with the Earth. To minimize bandwidth usage and increase throughputs, digital communication technologies based on optical links represent an interesting alternative. However, luminous energy is absorbed by particles present in the Earth's atmosphere. These perturbations imply new issues, and new coding schemes must be developed to cope with them. LDPC codes are a family of error-correcting codes. Their performance near Shannon's limit makes them an attractive solution for digital communication systems. They have been selected in the WiFi and 5G standards to achieve very high throughputs (several Gbps). They were also adopted in the CCSDS and DVB-S2 standards for space applications. This thesis is about the study and hardware implementation of coding schemes applied to optical links for space digital communication systems. The first contribution is the study of a coding scheme for an optical downlink with a soft-input decoder on Earth. In this study, we developed a hardware architecture implementing the decoding process on FPGA. The designed decoder reaches the expected throughput of 10 Gbps. A second contribution concerns the optical uplink, which implies hard-input decoding in a satellite. The resulting constraints led us to rethink the extended Gallager B algorithm. This made possible the development of a new architecture that manages the hard-input decoding process efficiently while complying with space constraints such as hardware complexity, heat dissipation and throughput (10 Gbps).
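Hard-input decoding of the kind required on the uplink can be illustrated with Gallager-style bit flipping. The sketch below is the simplest member of this family (plain bit flipping, not the extended Gallager B variant studied in the thesis), again using a small Hamming matrix as a stand-in for an LDPC matrix:

```python
def bit_flip_decode(H, y, max_iters=20):
    """Toy hard-decision bit-flipping decoding (a simpler relative of
    Gallager B): repeatedly flip the bits involved in the largest number
    of unsatisfied parity checks until the syndrome is zero."""
    m, n = len(H), len(y)
    x = y[:]
    for _ in range(max_iters):
        syndrome = [sum(H[i][j] * x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return x  # valid codeword
        # Count, for each bit, how many failing checks it participates in
        votes = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        worst = max(votes)
        x = [x[j] ^ (votes[j] == worst) for j in range(n)]
    return x

# (7,4) Hamming code: recover the all-zero codeword from one flipped bit
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(bit_flip_decode(H, [0, 0, 0, 0, 1, 0, 0]))  # → [0, 0, 0, 0, 0, 0, 0]
```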
Li, Erbao. "Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires." Phd thesis, Université de Cergy Pontoise, 2012. http://tel.archives-ouvertes.fr/tel-00806192.
Full textDelomier, Yann. "Conception et prototypage de décodeurs de codes correcteurs d’erreurs à partir de modèles comportementaux." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0047.
Full text
Digital communications are ubiquitous in the communicating objects of everyday life. Evolving communication standards, shorter time-to-market and heterogeneous applications make digital circuit design more challenging. Fifth-generation (5G) mobile technologies are an illustration of the current and future challenges. In this context, the design of digital architectures for the implementation of error-correcting code decoders often turns out to be especially difficult. High-Level Synthesis (HLS) is a computer-aided design (CAD) methodology that facilitates the fast prototyping of digital architectures. This methodology relies on behavioral descriptions to generate hardware architectures. However, the design of efficient behavioral models is essential for the generation of high-performance architectures. The results presented in this thesis focus on the definition of efficient behavioral models for the generation of decoder architectures dedicated to LDPC codes and polar codes, the two families of error-correcting codes adopted in the 5G standard. The proposed behavioral models have to combine flexibility, fast prototyping and efficiency. A first significant contribution of this thesis is the proposal of two behavioral models that enable the generation of efficient hardware architectures for the decoding of LDPC codes. These generic models are associated with a flexible methodology that eases the exploration of the architectural design space. Thus, a variety of trade-offs between throughput, latency and hardware complexity are obtained. Furthermore, this contribution represents a significant advance in the state of the art of automatic generation of LDPC decoder architectures. The performance achieved by the generated architectures is similar to that of architectures handwritten with a conventional CAD methodology. A second contribution is the proposal of a first behavioral model dedicated to the generation of hardware architectures of polar code decoders with a high-level synthesis methodology. This generic model also enables an efficient exploration of the architectural design space. It should be noted that the performance of the synthesized polar decoders is similar to that of state-of-the-art polar decoding architectures. A third contribution concerns the definition of a polar decoder behavioral model based on a "list" algorithm, known as the successive cancellation list decoding algorithm. This decoding algorithm achieves higher decoding performance at the cost of a significant computational overhead, which is also observed in the hardware complexity of the resulting decoding architecture. It should be emphasized that the proposed behavioral model is the first model for polar code decoders based on a "list" algorithm.
Ben Hadj Fredj, Abir. "Computations for the multiple access in wireless networks." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT030.
Full text
Future generations of wireless networks pose many challenges for the research community. In particular, these networks must be able to respond, with a certain quality of service, to the demands of a large number of connected people and objects. This leads to quite demanding requirements in terms of capacity. It is within this framework that non-orthogonal multiple access (NOMA) methods have been introduced. In this thesis, we have studied and proposed a multiple access method based on the compute-and-forward technique and on lattice codes, considering different lattice constructions. We have also proposed improvements to the decoding algorithm of the sparse code multiple access (SCMA) method based on lattice codes. In order to simplify the multi-stage decoders used here, we have proposed simplified expressions of LLRs as well as approximations. Finally, we studied Construction D of lattices using polar codes. This thesis was carried out in collaboration with the research center of Huawei France.
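The "simplified LLR expressions" theme can be illustrated with the classic max-log simplification, which replaces a log-sum-exp over candidate symbols by its dominant term. This is a generic sketch on a 4-PAM example, not the thesis's specific SCMA expressions:

```python
import math

def llr_exact(y, s0, s1, sigma2):
    """Exact bit LLR for a Gaussian channel: log-sum-exp over the symbol
    sets s0 (bit = 0) and s1 (bit = 1)."""
    num = math.log(sum(math.exp(-(y - s) ** 2 / (2 * sigma2)) for s in s0))
    den = math.log(sum(math.exp(-(y - s) ** 2 / (2 * sigma2)) for s in s1))
    return num - den

def llr_maxlog(y, s0, s1, sigma2):
    """Max-log approximation: keep only the dominant exponential term."""
    return (max(-(y - s) ** 2 / (2 * sigma2) for s in s0)
            - max(-(y - s) ** 2 / (2 * sigma2) for s in s1))

# 4-PAM example: symbols {-3, -1} carry bit 0, symbols {+1, +3} carry bit 1
print(llr_maxlog(-2.0, [-3, -1], [1, 3], 1.0))  # → 4.0
```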
Coullomb, Alexis. "Développement de substrats actifs et d'une méthode d'analyse de FRET quantitative pour décoder la mécanotransduction." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAY044/document.
Full text
Living cells can react to mechanical signals such as the rigidity of the surface they adhere on, the traction or compression forces applied on them, the liquid flow at their membrane surface, or the geometry of their adhesions or of their overall shape. Those signals influence cellular processes such as proliferation, differentiation, migration or cell death. Those processes are tightly regulated by biochemical reactions that constitute a signaling network. Mechanotransduction is the translation of the mechanical signal into a biochemical one. In order to study mechanotransduction, we considered the use of ultrasound to mechanically stimulate cells at relatively high temporal and spatial frequencies. Numerous setups and options have been considered in this very exploratory project, and we retain some promising leads for its continuation. We developed what we call active substrates, which allow us to control both spatially and temporally the mechanical stimulation of living cells. Those active substrates consist of iron micropillars embedded in a soft elastomer and actuated by two electromagnets. We can dynamically control the displacement of a pillar, which locally and continuously deforms the surface. This deformation in turn stretches or compresses the living cells spread on the surface nearby. Thanks to fluorescent trackers we can perform Traction Force Microscopy to monitor the stress applied by the pillars to the cells through the PDMS surface, and we can observe the mechanical response of the cells. Moreover, those substrates are compatible with live-cell fluorescence microscopy, which makes possible the observation of the cellular response at the morphological level (focal adhesions, protrusive activity, ...) and, most importantly, at the biochemical level. Indeed, in order to study the cellular biochemical response after a mechanical stimulation, we use fluorescence microscopy to observe biosensors containing pairs of donor/acceptor fluorophores. Those biosensors allow us to monitor the activity of proteins involved in cellular signaling by computing the Förster Resonance Energy Transfer (FRET) efficiency of those biosensors. To do so, samples are alternately excited at the donor and acceptor excitation wavelengths. The fluorescence signal is then simultaneously measured in the donor and acceptor emission channels. A substantial part of my thesis has been dedicated to the development of a quantitative method to analyze fluorescence images in order to measure FRET efficiencies that do not depend on experimental factors or on the biosensor concentration in cells. We assess different methods to compute the standard correction factors that account for spectral bleed-through and for direct excitation of acceptors at the donor excitation wavelength. To obtain more quantitative measurements, we have developed a new method to compute two additional correction factors. We compare this method with the only preexisting one, and we assess the influence of image processing parameters on FRET efficiency values.
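The three-channel (sensitized-emission) FRET computation with bleed-through and direct-excitation corrections can be sketched as follows. The formula is the standard three-cube form; the coefficient names and the numbers in the example are generic placeholders, not the thesis's notation or its two additional correction factors:

```python
def fret_efficiency(I_DD, I_DA, I_AA, d, a, G):
    """Three-channel (sensitized-emission) FRET efficiency estimate.
    I_DD: donor excitation / donor emission intensity
    I_DA: donor excitation / acceptor emission intensity (raw FRET channel)
    I_AA: acceptor excitation / acceptor emission intensity
    d: donor bleed-through fraction into the FRET channel
    a: acceptor direct-excitation fraction at the donor wavelength
    G: factor converting corrected FRET signal into donor-quenching units"""
    Fc = I_DA - d * I_DD - a * I_AA        # corrected sensitized emission
    return Fc / (Fc + G * I_DD)

# Hypothetical intensities and correction factors
print(round(fret_efficiency(100.0, 80.0, 200.0, d=0.3, a=0.1, G=1.0), 3))  # → 0.231
```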
Larregue, Julien. "Décoder la génétique du crime : développement, structure et enjeux de la criminologie biosociale aux États-Unis." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0134/document.
Full text
While it has long been marginalized in criminology, the investigation of biological factors of crime has known a renaissance in the United States since the 2000s under the name of "biosocial criminology". The development of this movement, which goes back to the 1960s, owes much to the progressive emancipation of the criminological discipline vis-à-vis sociology, as well as to social scientists' growing access to the methods and data of behavior genetics. Although biosocial criminology is not homogeneous, it is primarily produced by academics who occupy a dominated position within the criminological field and who use the genetics of crime as a tool for subverting the sociological domination. The development of biosocial criminology is far from having gained consensus among US criminologists. Rather than trying to normalize controversies by convincing their opponents of their works' relevance, the most subversive leaders of biosocial criminology adopt a polemical stance and a combative posture, using their heterodoxy to acquire greater visibility within the field. Others, on the other hand, seek to keep a low profile and avoid engaging in controversies. This carefulness is particularly visible in the treatment of the racial question, as numerous researchers avoid tying biosocial criminology to such a politically sensitive research theme. The subversive minority, however, uses the controversial aspect of the racial question as an example of the censorship that dominant sociologists supposedly impose within the field.
Hentati, Manel. "Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)." Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.
Full text
The main purpose of this PhD is to contribute to the design and implementation of a reconfigurable decoder using the MPEG-RVC standard. The MPEG-RVC standard, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies through a dataflow model named RVC-CAL. This standard offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on target hardware. Our design flow is based on using dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. With the DPR technique, a hardware module can be replaced by another one that has the same function or the same algorithm but a different architecture. This concept allows the designer to configure various decoders according to the data inputs or the requirements (latency, speed, power consumption, ...). The use of MPEG-RVC and DPR improves the development process and the decoder performance. However, DPR poses several problems, such as the placement of tasks and the fragmentation of the FPGA area, which influence application performance. Therefore, we need to define methods for the placement of hardware tasks on the FPGA. In this work, we propose an off-line placement approach based on a linear programming strategy to find the optimal placement of hardware tasks and to minimize resource utilization. Application of different data combinations and a comparison with a state-of-the-art method show the high performance of the proposed approach.
Chen, Jinyuan. "Communication au sein d'un canal de broadcast avec feedback limité et retardé : limites fondamentales, nouveaux encodeurs et décodeurs." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0035/document.
Full text
In many multiuser wireless communications scenarios, good feedback is a crucial ingredient that facilitates improved performance. While being useful, perfect feedback is also hard and time-consuming to obtain. With this challenge as a starting point, the main work of this thesis addresses the simple yet elusive and fundamental question of "how much quality of feedback, and when, must one send to achieve a certain degrees-of-freedom (DoF) performance". The work concisely describes the DoF region in a very broad setting corresponding to a general feedback process that, at any point in time, may or may not provide channel state information at the transmitter (CSIT), of some arbitrary quality, for any past, current or future channel (fading) realization. Under standard assumptions, and under the assumption of sufficiently good delayed CSIT, the work concisely captures the effect of the quality of CSIT offered at any time, about any channel. This was achieved for the two-user MISO BC, and was then extended to the MIMO BC and MIMO IC settings. Further work also considers different aspects of communicating with limited feedback, such as global CSI at the receivers and diversity. In addition to the theoretical limits and novel encoders and decoders, the work provides insights into many practical questions.