Contents
Academic literature on the topic 'Compression (Télécommunications)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compression (Télécommunications).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Dissertations / Theses on the topic "Compression (Télécommunications)"
Sans, Thierry. "Beyond access control - specifying and deploying security policies in information systems." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0040.
Multimedia streaming services in their traditional design require high performance from the network (high bandwidth, low error rates and low delay), which is in contradiction with the resource constraints of wireless networks (limited bandwidth, error-prone channels and varying network conditions). In this thesis, we study the hypothesis that this harsh environment with severe resource constraints requires application-specific architectures, rather than general-purpose protocols, to increase resource usage efficiency. We consider case studies on wireless multicast video streaming. The first study evaluates the performance of ROHC and UDP-Lite. We found that bandwidth usage is improved because the packet loss rate is decreased by the packet size reduction achieved by ROHC and by the less strict integrity verification policy implemented by UDP-Lite. The second and third studies consider the case where users join a unidirectional common channel at random times to receive video streaming. After joining the transmission, the user has to wait to receive both the video and the header compression contexts before the multimedia application can be played. This start-up delay depends on the user's access time and on the periodicity with which the video and header compression contexts are initialized and refreshed. "Top-down" cross-layer approaches were developed to adapt the header compression behavior to the video compression. These studies show that application-specific protocol architectures achieve the bandwidth usage, error robustness and video start-up delay required by wireless networks.
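To see why the header compression mentioned in this abstract reduces packet size so markedly, a back-of-the-envelope calculation helps. The sketch below uses typical textbook header sizes (40 bytes for IPv4/UDP/RTP, a few bytes for a ROHC compressed header) and an assumed payload size; none of these figures come from the thesis itself.

```python
# Rough per-packet overhead comparison for a small media payload.
# Header sizes are standard textbook values (IPv4 20 B + UDP 8 B + RTP 12 B);
# the ROHC compressed-header size and the payload size are assumptions.
PAYLOAD = 160                     # bytes of media data per packet (assumed)
UNCOMPRESSED_HDR = 20 + 8 + 12    # IPv4 + UDP + RTP
ROHC_HDR = 3                      # typical compressed header once the context is established

for name, hdr in [("uncompressed", UNCOMPRESSED_HDR), ("ROHC", ROHC_HDR)]:
    total = PAYLOAD + hdr
    print(f"{name:>13}: {total} B/packet, header overhead {hdr / total:.1%}")

# Smaller packets also expose fewer bits to channel errors, which is the
# packet-loss effect discussed in the abstract above.
```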
Al-Rababa'a, Ahmad. "Arithmetic bit recycling data compression." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26759.
Data compression aims to reduce the size of data so that it requires less storage space and less communication channel bandwidth. Many compression techniques (such as LZ77 and its variants) suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The Multiplicity of Encodings (ME) means that the source data may be encoded in more than one way. In its simplest case, it occurs when a compression technique has the opportunity, at certain steps during the encoding process, to encode the same symbol in different ways. The Bit Recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results showed that bit recycling achieved better compression (a reduction of about 9% in the size of files that had been compressed by Gzip) by exploiting ME. Dubé and Beaudoin pointed out that their technique could not perfectly minimize the redundancy caused by ME since it is built on Huffman coding, which does not have the ability to deal with codewords of fractional lengths; i.e. it is constrained to generating codewords of integral lengths. Moreover, Huffman-based Bit Recycling (HuBR) imposes an additional burden to avoid some situations that affect its performance negatively. Unlike Huffman coding, Arithmetic Coding (AC) can manipulate codewords of fractional lengths. Furthermore, it has attracted researchers in the last few decades since it is more powerful and flexible than Huffman coding. Accordingly, this work aims to address the problem of adapting bit recycling to arithmetic coding in order to improve the coding efficiency and the flexibility of HuBR. We addressed this problem through our four (published) contributions. These contributions are presented in this thesis and can be summarized as follows. Firstly, we propose a new scheme for adapting HuBR to AC. The proposed scheme, named Arithmetic-Coding-based Bit Recycling (ACBR), describes the framework and the principle of adapting HuBR to AC. We also present the theoretical analysis required to estimate the average amount of redundancy that can be removed by HuBR and ACBR in applications that suffer from ME; it shows that ACBR achieves perfect recycling in all cases whereas HuBR achieves perfect recycling only in very specific cases. Secondly, the problem of the aforementioned ACBR scheme is that it uses arbitrary-precision calculations, which require unbounded (or infinite) resources. Hence, in order to benefit from ACBR in practice, we propose a new finite-precision version of the ACBR scheme, which makes it efficiently applicable on computers with conventional fixed-size registers and easily interfaced with applications that suffer from ME. Thirdly, we propose the use of both techniques (HuBR and ACBR) as a means to reduce the redundancy in plurally parsable dictionaries that are used to obtain a binary variable-to-fixed length code. We show theoretically and experimentally that both techniques achieve a significant improvement (less redundancy) in this respect, but ACBR outperforms HuBR and provides a wider class of binary sources that may benefit from a plurally parsable dictionary. Moreover, we show that ACBR is more flexible than HuBR in practice. Fourthly, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm.
In order to compare the performance of HuBR and ACBR, the corresponding theoretical results and analysis of HuBR and ACBR are presented. The results show that both techniques achieved almost the same significant reduction in the redundancy of the balanced codes generated by Knuth's algorithm.
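The abstract's point about integral versus fractional codeword lengths can be made concrete with a small numerical sketch (not taken from the thesis): for a skewed binary source, Huffman coding cannot do better than whole-bit codewords, while the entropy that an arithmetic coder can approach is fractional.

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword length assigned to each symbol by Huffman's algorithm."""
    lengths = [0] * len(probs)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol under the merged node gets one bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

p0 = 0.9                           # skewed binary source, assumed for illustration
entropy = -(p0 * math.log2(p0) + (1 - p0) * math.log2(1 - p0))
print(f"entropy            : {entropy:.3f} bits/symbol")   # ~0.469, a fractional target
print("Huffman, 1 symbol  : 1.000 bits/symbol")            # whole-bit codewords cannot do better

# Blocking symbols into pairs helps, but codeword lengths stay integral,
# so some redundancy remains; arithmetic coding is not limited this way.
pair_probs = [p0 * p0, p0 * (1 - p0), (1 - p0) * p0, (1 - p0) * (1 - p0)]
avg_pair = sum(p * l for p, l in zip(pair_probs, huffman_lengths(pair_probs))) / 2
print(f"Huffman, pairs     : {avg_pair:.3f} bits/symbol")
```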
Baskurt, Atilla. "Compression d'images numériques par la transformation cosinus discrète." Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0036.
Piana, Thibault. "Étude et développement de techniques de compression pour les signaux de télécommunications en bande de base." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0424.
This thesis investigates signal compression to enhance bandwidth efficiency in satellite communications, focusing on Cloud Radio Access Network (C-RAN) architectures for the ground segment. Conducted under a CIFRE contract with Safran Data Systems and supervised by Lab-STICC, this research explores techniques for RF baseband signal compression, crucial for maintaining high fidelity of the data transmitted between terrestrial stations and satellites. The study leverages advances such as Ground Station as a Service (GSaaS), which facilitates optimized resource management. It specifically addresses the challenges of efficiently compressing the wide bandwidths used, which requires innovative techniques to reduce the load on terrestrial networks without compromising signal quality. Lossless and lossy compression methods are evaluated, with particular emphasis on dictionary-based compression techniques for their efficiency on sparse signals, and on predictive methods for their ability to minimize the discrepancy between predicted and actual values. This research demonstrates significant potential to improve data management in satellite communications, offering viable solutions for current and future data compression challenges.
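As a toy illustration of the predictive route mentioned in this abstract (assumed details, not the thesis's scheme): predict each sample from the previous one and keep only the residual, which for a slowly varying signal needs far fewer bits per sample than the raw waveform.

```python
import numpy as np

# Synthetic stand-in for a baseband-like signal (not real telemetry): a slowly
# varying component plus a little noise, quantized to integers.
rng = np.random.default_rng(0)
t = np.arange(4096)
signal = np.round(1500 * np.sin(2 * np.pi * t / 500)
                  + 20 * rng.standard_normal(t.size)).astype(int)

# First-order predictor: predict each sample by the previous one, store the residual.
residual = np.diff(signal, prepend=signal[0])

def empirical_entropy(x):
    """Shannon entropy (bits/sample) of the empirical distribution of x."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print(f"raw     : {empirical_entropy(signal):.2f} bits/sample")
print(f"residual: {empirical_entropy(residual):.2f} bits/sample")
```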
Madre, Guillaume. "Application de la transformée en nombres entiers à l'étude et au développement d'un codeur de parole pour transmission sur réseaux IP." Brest, 2004. http://www.theses.fr/2004BRES2036.
Our study considers the compression of voice signals for transmission over the Internet Protocol (VoIP). With the implementation of an IP telephony application in prospect, this work provides the first elements of a real-time speech coding system and its integration on a DSP. It concentrates on the CS-ACELP (Conjugate Structure - Algebraic Code-Excited Linear Prediction) G.729 speech coder, adopted among the International Telecommunication Union (ITU) recommendations and already recognized for its low implementation complexity. The main goal was to improve its performance and decrease its computational cost, while maintaining the trade-off between coding quality and required complexity. To reduce the computational cost of this coder, we investigated the mathematical foundations of the Number Theoretic Transform (NTT), which is finding more and more applications in signal processing. In particular, we focused on the Fermat Number Transform (FNT), which is well suited to digital signal processing operations. Its application to different coding algorithms allows a significant reduction of the computational complexity. The development of new efficient algorithms for the Linear Prediction (LP) of the speech signal and for the excitation modeling has thus allowed a modification of the G.729 coder and its implementation on a fixed-point processor. Moreover, a new Voice Activity Detection (VAD) function has enabled a more efficient procedure for silence compression and the reduction of the transmission rate.
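To give a flavour of the transform named above, here is a minimal, naive sketch of a number-theoretic transform over the Fermat prime F4 = 2^16 + 1, used to compute an exact integer cyclic convolution. The transform length, the root of unity and the data are illustrative assumptions, not the parameters used in the thesis; practical FNT implementations choose powers of two as roots so that multiplications become bit shifts.

```python
# Naive number-theoretic transform over the Fermat prime F4 = 2**16 + 1 (65537),
# used to compute an exact integer cyclic convolution.
P = 2**16 + 1                      # F4, prime
N = 32                             # transform length; must divide P - 1
ROOT = pow(3, (P - 1) // N, P)     # 3 is a primitive root mod 65537, so ROOT has order N

def ntt(x, root):
    """O(N^2) forward/inverse NTT of x modulo P, depending on the root passed in."""
    return [sum(xj * pow(root, i * j, P) for j, xj in enumerate(x)) % P
            for i in range(len(x))]

def cyclic_convolution(a, b):
    """Cyclic convolution of two length-N integer sequences via the transform."""
    A = ntt(a, ROOT)
    B = ntt(b, ROOT)
    C = [(x * y) % P for x, y in zip(A, B)]
    inv_n = pow(N, P - 2, P)                   # modular inverses via Fermat's little theorem
    inv_root = pow(ROOT, P - 2, P)
    return [(c * inv_n) % P for c in ntt(C, inv_root)]

# Sanity check against the direct definition (values kept small enough that no
# result wraps around the modulus, so the convolution is exact).
a = [(7 * i + 3) % 40 for i in range(N)]
b = [(5 * i + 1) % 40 for i in range(N)]
direct = [sum(a[j] * b[(i - j) % N] for j in range(N)) for i in range(N)]
assert cyclic_convolution(a, b) == direct
print("FNT-style convolution matches the direct computation")
```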
Prat, Sylvain. "Compression et visualisation de grandes scènes 3D par représentation à base topologique." Rennes 1, 2007. http://www.theses.fr/2007REN1S039.
Although visualization applications for 3D scenes have become widespread since the advent of powerful 3D graphics cards allowing real-time rendering, the rendering of large 3D scenes remains problematic because of the large number of 3D primitives to handle during the short period of time elapsing between two successive image frames. On the other hand, the democratisation of the Internet at home creates new needs, including the need to navigate through large 3D environments over a network. Applications are indeed numerous: video games, virtual tourism, geolocation (GPS), virtual architecture, 3D medical imaging... Within this context, users exchange and share information regarding the virtual environment over the network. The subject of this thesis lies between these two issues. We are interested in the special case of viewing virtual cities in a "communicating" fashion, where 3D data pass over the network. This type of application raises two major problems: on the one hand, the selection of relevant data to be transmitted to the user, because it is too expensive to transmit the full 3D environment before the navigation starts; on the other hand, taking into account the network constraints: low bandwidth, latency, error tolerance. In this thesis, we propose several contributions: a generic compression method suitable for volume meshes, a method for constructing and partitioning 3D urban scenes from building footprints, and an optimized method for building complex surfaces from a polygon soup.
Malinowski, Simon. "Codes joints source-canal pour transmission robuste sur canaux mobiles." Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/malinowski.pdf.
Joint source-channel coding has been an area of recent research activity. This is due in particular to the limits of Shannon's separation theorem, which states that source and channel coding can be performed separately in order to reach optimality. Over the last decade, various works have considered performing these operations jointly, and source codes have hence been studied in depth. In this thesis, we have worked with these two kinds of codes in the joint source-channel coding context. A state model for soft decoding of variable-length and quasi-arithmetic codes is proposed. This state model is parameterized by an integer T that controls a trade-off between decoding performance and complexity. The performance of these source codes on the aggregated state model is then analyzed together with their resynchronisation properties. It is hence possible to predict the performance of a given code with respect to the aggregation parameter T. A robust decoding scheme exploiting side information is then presented. The extra redundancy is in the form of partial length constraints at different time instants of the decoding process. Finally, two different distributed coding schemes based on quasi-arithmetic codes are proposed. The first one is based on puncturing the quasi-arithmetic bit-stream, while the second uses a new kind of codes: overlapped quasi-arithmetic codes. The decoding performance of these schemes is very competitive compared to classical techniques using channel codes.
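The soft-decoding and resynchronisation concerns above stem from how fragile variable-length codes are under bit errors. The following toy sketch (an assumed three-symbol prefix code, not a code from the thesis) shows how a single flipped bit desynchronizes a hard decoder, which is what trellis-based soft decoding tries to mitigate.

```python
# A single bit error in a variable-length coded stream shifts codeword boundaries,
# so errors spread over several decoded symbols. The code table is illustrative only.
code = {"a": "0", "b": "10", "c": "11"}
decode_map = {v: k for k, v in code.items()}

def encode(symbols):
    return "".join(code[s] for s in symbols)

def decode(bits):
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in decode_map:          # greedy prefix-code decoding
            out.append(decode_map[buf])
            buf = ""
    return "".join(out)

message = "abcabacbb"
bits = encode(message)
flipped = bits[:5] + ("1" if bits[5] == "0" else "0") + bits[6:]   # flip one bit

print(decode(bits))      # 'abcabacbb' -- the original message
print(decode(flipped))   # the decoding error propagates past the flipped bit
```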
Ouled, Zaid Azza. "Amélioration des performances des systèmes de compression JPEG et JPEG2000." Poitiers, 2002. http://www.theses.fr/2002POIT2294.
Barland, Rémi. "Évaluation objective sans référence de la qualité perçue : applications aux images et vidéos compressées." Nantes, 2007. http://www.theses.fr/2007NANT2028.
The switch to all-digital technology and the development of multimedia communications produce an ever-increasing flow of information. This massive increase in the quantity of data exchanged generates a progressive saturation of the transmission networks. To deal with this situation, compression standards seek to exploit more and more of the spatial and/or temporal correlation to reduce the bit rate. The resulting reduction of information creates visual artefacts which can degrade the visual content of the scene and thus disturb the end-user. In order to offer the best broadcasting service, the assessment of perceived quality is therefore necessary. Subjective tests, which are the reference method for quantifying the perception of distortions, are expensive, difficult to implement and remain inappropriate for on-line quality assessment. In this thesis, we are interested in the most widely used compression standards (image or video) and have designed no-reference quality metrics based on the exploitation of the most annoying visual artefacts, such as the blocking, blurring and ringing effects. The proposed approach is modular and adapts to the considered coder and to the required ratio between computational cost and performance. For low complexity, the metric quantifies the distortions specific to the considered coder, exploiting only the properties of the image signal. To improve performance, at the cost of some additional complexity, it also integrates cognitive models simulating the mechanisms of visual attention. The saliency maps generated are then used to refine the proposed distortion measures based purely on the image signal.
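A minimal sketch of the kind of signal-based blocking measure such no-reference metrics build on: compare luminance differences across 8x8 block boundaries with differences inside blocks. The 8-pixel grid and the ratio form are assumptions for illustration, not the metric proposed in the thesis.

```python
import numpy as np

def blockiness(img, block=8):
    """Crude no-reference blocking score: ratio of average luminance jumps
    across block boundaries to jumps inside blocks (values well above 1
    suggest visible blocking). Illustrative only."""
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1))          # horizontal neighbour differences
    cols = np.arange(dh.shape[1])
    boundary = (cols % block) == block - 1     # differences straddling a block edge
    return dh[:, boundary].mean() / (dh[:, ~boundary].mean() + 1e-9)

# Usage sketch on synthetic data: a smooth image versus one with an artificial
# 8x8 block structure added.
rng = np.random.default_rng(0)
smooth = rng.normal(128, 2, size=(64, 64))
blocky = smooth + np.kron(rng.normal(0, 8, size=(8, 8)), np.ones((8, 8)))
print(f"smooth: {blockiness(smooth):.2f}  blocky: {blockiness(blocky):.2f}")
```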
Jégou, Hervé. "Codes robustes et codes joints source-canal pour transmission multimédia sur canaux mobiles." Rennes 1, 2005. https://tel.archives-ouvertes.fr/tel-01171129.
Books on the topic "Compression (Télécommunications)"
Luxereau, François. Compression du signal audiovisuel: Conserver l'information et réduire le débit des données. Paris: Dunod, 2008.
Ramamohan, Rao K., and P. C. Yip, eds. The Transform and Data Compression Handbook. Boca Raton, Fla.: CRC Press, 2009.
Barni, Mauro, ed. Document and Image Compression. Boca Raton, FL: CRC/Taylor & Francis, 2006.
Ramamohan, Rao K., and P. C. Yip. Transform and Data Compression Handbook. Taylor & Francis Group, 2010.
Barni, Mauro. Document and Image Compression. Taylor & Francis Group, 2010.
Bell, Timothy C., Ian H. Witten, and Alistair Moffat. Managing Gigabytes: Compressing and Indexing Documents and Images, Second Edition. Elsevier Science & Technology Books, 1999.