Dissertations / Theses on the topic 'Compression (Télécommunications)'
Consult the top 50 dissertations / theses for your research on the topic 'Compression (Télécommunications).'
Sans, Thierry. "Beyond access control - specifying and deploying security policies in information systems." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0040.
Full text
Multimedia streaming services in their traditional design require high performance from the network (high bandwidth, low error rates and low delay), which is in contradiction with the resource constraints of wireless networks (limited bandwidth, error-prone channels and varying network conditions). In this thesis, we study the hypothesis that this harsh environment with severe resource constraints requires application-specific architectures, rather than general-purpose protocols, to increase resource usage efficiency. We consider case studies on wireless multicast video streaming. The first study evaluates the performance of ROHC and UDP-Lite. We found that bandwidth usage is improved because the packet loss rate is decreased by the packet size reduction achieved by ROHC and by the less strict integrity verification policy implemented by UDP-Lite. The second and third studies consider the case where users join a unidirectional common channel at random times to receive video streaming. After joining the transmission, the user has to wait to receive both the video and the header compression contexts before the multimedia application can start playing. This start-up delay depends on the user's access time and on how often the video and header compression contexts are initialized and refreshed. "Top-down" cross-layer approaches were developed to adapt the header compression behavior to the video compression. These studies show that application-specific protocol architectures achieve the bandwidth usage, error robustness and video start-up delay needed for wireless networks.
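The first case study above rests on a simple quantitative point: compressing the RTP/UDP/IP header stack shrinks the whole packet, and a smaller packet is less likely to be hit by a channel error, while UDP-Lite additionally avoids dropping packets whose errors fall only in the payload. A back-of-envelope sketch of the packet-size side of that argument (illustrative figures only, not taken from the thesis):

```python
# Back-of-envelope illustration (not from the thesis): ROHC shrinks the
# 40-byte RTP/UDP/IPv4 header stack to a few bytes, and for a fixed bit-error
# rate a smaller packet is less likely to contain at least one corrupted bit.
RTP_UDP_IPV4 = 12 + 8 + 20   # uncompressed header stack, bytes
ROHC_TYPICAL = 3             # typical ROHC header in steady state, bytes (1-3)
PAYLOAD = 160                # small audio/video payload, bytes (assumed)
BER = 1e-5                   # assumed channel bit-error rate

def loss_prob(packet_bytes: int, ber: float) -> float:
    """Probability that at least one bit of the packet is corrupted."""
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

for name, header in [("uncompressed", RTP_UDP_IPV4), ("ROHC", ROHC_TYPICAL)]:
    size = header + PAYLOAD
    print(f"{name:12s}: {size:3d} B/packet, "
          f"header overhead {100 * header / size:4.1f} %, "
          f"P(packet hit by an error) = {loss_prob(size, BER):.4f}")
```

UDP-Lite adds to this by restricting the checksum coverage to the headers, so a corrupted payload bit no longer forces the whole packet to be discarded.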
Al-Rababa'a, Ahmad. "Arithmetic bit recycling data compression." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26759.
Full textLa compression des données est la technique informatique qui vise à réduire la taille de l'information pour minimiser l'espace de stockage nécessaire et accélérer la transmission des données dans les réseaux à bande passante limitée. Plusieurs techniques de compression telles que LZ77 et ses variantes souffrent d'un problème que nous appelons la redondance causée par la multiplicité d'encodages. La multiplicité d'encodages (ME) signifie que les données sources peuvent être encodées de différentes manières. Dans son cas le plus simple, ME se produit lorsqu'une technique de compression a la possibilité, au cours du processus d'encodage, de coder un symbole de différentes manières. La technique de compression par recyclage de bits a été introduite par D. Dubé et V. Beaudoin pour minimiser la redondance causée par ME. Des variantes de recyclage de bits ont été appliquées à LZ77 et les résultats expérimentaux obtenus conduisent à une meilleure compression (une réduction d'environ 9% de la taille des fichiers qui ont été compressés par Gzip en exploitant ME). Dubé et Beaudoin ont souligné que leur technique pourrait ne pas minimiser parfaitement la redondance causée par ME, car elle est construite sur la base du codage de Huffman qui n'a pas la capacité de traiter des mots de code (codewords) de longueurs fractionnaires, c'est-à-dire qu'elle permet de générer des mots de code de longueurs intégrales. En outre, le recyclage de bits s'appuie sur le codage de Huffman (HuBR) qui impose des contraintes supplémentaires pour éviter certaines situations qui diminuent sa performance. Contrairement aux codes de Huffman, le codage arithmétique (AC) peut manipuler des mots de code de longueurs fractionnaires. De plus, durant ces dernières décennies, les codes arithmétiques ont attiré plusieurs chercheurs vu qu'ils sont plus puissants et plus souples que les codes de Huffman. Par conséquent, ce travail vise à adapter le recyclage des bits pour les codes arithmétiques afin d'améliorer l'efficacité du codage et sa flexibilité. Nous avons abordé ce problème à travers nos quatre contributions (publiées). Ces contributions sont présentées dans cette thèse et peuvent être résumées comme suit. Premièrement, nous proposons une nouvelle technique utilisée pour adapter le recyclage de bits qui s'appuie sur les codes de Huffman (HuBR) au codage arithmétique. Cette technique est nommée recyclage de bits basé sur les codes arithmétiques (ACBR). Elle décrit le cadriciel et les principes de l'adaptation du HuBR à l'ACBR. Nous présentons aussi l'analyse théorique nécessaire pour estimer la redondance qui peut être réduite à l'aide de HuBR et ACBR pour les applications qui souffrent de ME. Cette analyse démontre que ACBR réalise un recyclage parfait dans tous les cas, tandis que HuBR ne réalise de telles performances que dans des cas très spécifiques. Deuxièmement, le problème de la technique ACBR précitée, c'est qu'elle requiert des calculs à précision arbitraire. Cela nécessite des ressources illimitées (ou infinies). Afin de bénéficier de cette dernière, nous proposons une nouvelle version à précision finie. Ladite technique devienne ainsi efficace et applicable sur les ordinateurs avec les registres classiques de taille fixe et peut être facilement interfacée avec les applications qui souffrent de ME. Troisièmement, nous proposons l'utilisation de HuBR et ACBR comme un moyen pour réduire la redondance afin d'obtenir un code binaire variable à fixe. 
Nous avons prouvé théoriquement et expérimentalement que les deux techniques permettent d'obtenir une amélioration significative (moins de redondance). À cet égard, ACBR surpasse HuBR et fournit une classe plus étendue des sources binaires qui pouvant bénéficier d'un dictionnaire pluriellement analysable. En outre, nous montrons qu'ACBR est plus souple que HuBR dans la pratique. Quatrièmement, nous utilisons HuBR pour réduire la redondance des codes équilibrés générés par l'algorithme de Knuth. Afin de comparer les performances de HuBR et ACBR, les résultats théoriques correspondants de HuBR et d'ACBR sont présentés. Les résultats montrent que les deux techniques réalisent presque la même réduction de redondance sur les codes équilibrés générés par l'algorithme de Knuth.
Data compression aims to reduce the size of data so that it requires less storage space and less communication channels bandwidth. Many compression techniques (such as LZ77 and its variants) suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The Multiplicity of Encodings (ME) means that the source data may be encoded in more than one way. In its simplest case, it occurs when a compression technique with ME has the opportunity at certain steps, during the encoding process, to encode the same symbol in different ways. The Bit Recycling compression technique has been introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied on LZ77 and the experimental results showed that bit recycling achieved better compression (a reduction of about 9% in the size of files that have been compressed by Gzip) by exploiting ME. Dubé and Beaudoin have pointed out that their technique could not minimize the redundancy caused by ME perfectly since it is built on Huffman coding, which does not have the ability to deal with codewords of fractional lengths; i.e. it is constrained to generating codewords of integral lengths. Moreover, Huffman-based Bit Recycling (HuBR) has imposed an additional burden to avoid some situations that affect its performance negatively. Unlike Huffman coding, Arithmetic Coding (AC) can manipulate codewords of fractional lengths. Furthermore, it has attracted researchers in the last few decades since it is more powerful and flexible than Huffman coding. Accordingly, this work aims to address the problem of adapting bit recycling to arithmetic coding in order to improve the code effciency and the flexibility of HuBR. We addressed this problem through our four (published) contributions. These contributions are presented in this thesis and can be summarized as follows. Firstly, we propose a new scheme for adapting HuBR to AC. The proposed scheme, named Arithmetic-Coding-based Bit Recycling (ACBR), describes the framework and the principle of adapting HuBR to AC. We also present the necessary theoretical analysis that is required to estimate the average amount of redundancy that can be removed by HuBR and ACBR in the applications that suffer from ME, which shows that ACBR achieves perfect recycling in all cases whereas HuBR achieves perfect recycling only in very specific cases. Secondly, the problem of the aforementioned ACBR scheme is that it uses arbitrary-precision calculations, which requires unbounded (or infinite) resources. Hence, in order to benefit from ACBR in practice, we propose a new finite-precision version of the ACBR scheme, which makes it efficiently applicable on computers with conventional fixed-sized registers and can be easily interfaced with the applications that suffer from ME. Thirdly, we propose the use of both techniques (HuBR and ACBR) as the means to reduce the redundancy in plurally parsable dictionaries that are used to obtain a binary variable-to-fixed length code. We theoretically and experimentally show that both techniques achieve a significant improvement (less redundancy) in this respect, but ACBR outperforms HuBR and provides a wider class of binary sources that may benefit from a plurally parsable dictionary. Moreover, we show that ACBR is more flexible than HuBR in practice. Fourthly, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. 
In order to compare the performance of HuBR and ACBR, the corresponding theoretical results and analysis of HuBR and ACBR are presented. The results show that both techniques achieved almost the same significant reduction in the redundancy of the balanced codes generated by Knuth's algorithm.
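The redundancy that bit recycling targets can be made concrete with a toy computation. Assuming that at some step the encoder may emit any one of k interchangeable encodings of the same phrase, the choice itself can convey log2(k) bits, a fractional quantity in general; a scheme restricted to integral codeword lengths cannot always recover all of it. The sketch below conveys only this rough intuition, not the authors' analysis of HuBR or ACBR:

```python
# Toy illustration (not the authors' analysis): if the encoder may choose among
# k interchangeable encodings of the same phrase, the choice itself can carry
# log2(k) bits.  Ideal (arithmetic-coding-based) recycling can exploit this
# fractional amount; a scheme limited to integral codeword lengths, roughly the
# Huffman-based situation, can only count on the integer part.
import math

for k in (2, 3, 5, 6, 7):
    recyclable = math.log2(k)
    integral = math.floor(recyclable)
    print(f"k = {k}: recyclable redundancy = {recyclable:.3f} bits, "
          f"integral-length recycling <= {integral} bits")
```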
Baskurt, Atilla. "Compression d'images numériques par la transformation cosinus discrète." Lyon, INSA, 1989. http://www.theses.fr/1989ISAL0036.
Full text
Piana, Thibault. "Étude et développement de techniques de compression pour les signaux de télécommunications en bande de base." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0424.
Full textThis thesis investigates signal compression to enhance bandwidth efficiency in satellite communications, focusing on Cloud Radio Access Network (C-RAN) architectures for the ground segment. Conducted under a CIFRE contract with Safran Data Systems and supervised by Lab-STICC, this research explores the techniques of RF baseband signal compression, crucial for maintaining high fidelity of data transmitted between terrestrial stations and satellites. The study leverages advances such as Ground Station as a Service (GSaaS), which facilitates optimized resource management. It specifically addresses the challenges associated with efficiently compressing the wide bandwidths used, which requires innovative techniques to reduce the load on terrestrial networks without compromising signal quality. Lossless and lossy compression methods are evaluated, with particular emphasis on dictionary-based compression techniques for their efficiency in sparsity, and predictive methods for their ability to minimize discrepancies between predicted and actual values. This research demonstrates significant potential to improve data management in satellite communications, offering viable solutions for current and future data compression challenges
Madre, Guillaume. "Application de la transformée en nombres entiers à l'étude et au développement d'un codeur de parole pour transmission sur réseaux IP." Brest, 2004. http://www.theses.fr/2004BRES2036.
Full text
Our study considers the compression of vocal signals for the transmission of Voice over Internet Protocol (VoIP). With the implementation of an IP telephony application in prospect, the work provides the first elements of a real-time speech coding system and its integration on a DSP. It concentrates on the CS-ACELP (Conjugate Structure - Algebraic Code-Excited Linear Prediction) G.729 speech coder, retained among the International Telecommunication Union (ITU) recommendations and already recognized for its low implementation complexity. The main goal was to improve its performance and decrease its computational cost, while maintaining the compromise between coding quality and required complexity. To reduce the computational cost of this coder, we looked further into the mathematical foundations of the Number Theoretic Transform (NTT), which is finding more and more applications in signal processing. We focused in particular on the Fermat Number Transform (FNT), which is well suited to digital processing operations. Its application to different coding algorithms allows an important reduction of the computational complexity. Thus, the development of new efficient algorithms for the Linear Prediction (LP) of the speech signal and for the excitation modeling has allowed a modification of the G.729 coder and its implementation on a fixed-point processor. Moreover, a new Voice Activity Detection (VAD) function enabled the implementation of a more efficient procedure for silence compression and a reduction of the transmission rate.
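The Fermat Number Transform mentioned above is a number-theoretic transform taken modulo a Fermat number, for which the root of unity can be a power of two, so that in fixed-point hardware the twiddle multiplications reduce to bit shifts and the convolution is exact. A minimal sketch of the idea (a length-16 transform modulo F3 = 257 in plain Python, not the coder developed in the thesis):

```python
# Minimal Fermat Number Transform sketch (not the thesis implementation):
# a length-16 NTT modulo the Fermat number F3 = 2**8 + 1 = 257 with root of
# unity 2, used to compute an exact circular convolution.  In fixed-point
# hardware, multiplications by powers of two reduce to bit shifts.
P, N, ROOT = 257, 16, 2        # 2 has multiplicative order 16 modulo 257
ROOT_INV = pow(ROOT, -1, P)    # 129
N_INV = pow(N, -1, P)          # 241

def fnt(x, root):
    """Naive O(N^2) number-theoretic transform of a length-N integer list."""
    return [sum(x[n] * pow(root, k * n, P) for n in range(N)) % P for k in range(N)]

def circular_convolution(a, b):
    A, B = fnt(a, ROOT), fnt(b, ROOT)
    C = [(x * y) % P for x, y in zip(A, B)]
    return [(c * N_INV) % P for c in fnt(C, ROOT_INV)]

a = [1, 2, 3, 0, 1, 0, 2, 1] + [0] * 8
b = [2, 1, 0, 1] + [0] * 12
direct = [sum(a[m] * b[(k - m) % N] for m in range(N)) for k in range(N)]
assert circular_convolution(a, b) == direct   # exact as long as results stay below 257
print(direct)
```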
Prat, Sylvain. "Compression et visualisation de grandes scènes 3D par représentation à base topologique." Rennes 1, 2007. http://www.theses.fr/2007REN1S039.
Full text
Although visualization applications for 3D scenes have become widespread since the advent of powerful 3D graphics cards allowing real-time rendering, the rendering of large 3D scenes remains problematic because of the large number of 3D primitives to handle during the short period of time elapsing between two successive image frames. On the other hand, the democratisation of the Internet at home creates new needs, including the need to navigate through large 3D environments over a network. Applications are indeed numerous: video games, virtual tourism, geolocalization (GPS), virtual architecture, 3D medical imaging, etc. Within this context, users exchange and share information regarding the virtual environment over the network. The subject of this thesis lies in between these two issues. We are interested in the special case of viewing virtual cities in a "communicating" fashion where 3D data pass over the network. This type of application raises two major problems: first, the selection of relevant data to be transmitted to the user, because it is too expensive to transmit the full 3D environment before the navigation starts; second, taking into account the network constraints: low bandwidth, latency and error tolerance. In this thesis, we propose several contributions: a generic compression method suitable for volume meshes, a method for constructing and partitioning 3D urban scenes from building footprints, and an optimized method for building complex surfaces from a polygon soup.
Malinowski, Simon. "Codes joints source-canal pour transmission robuste sur canaux mobiles." Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/malinowski.pdf.
Full textJoint source-channel coding has been an area of recent research activity. This is due in particular to the limits of Shannon's separation theorem, which states that source and channel coding can be performed separately in order to reach optimality. Over the last decade, various works have considered performing these operations jointly. Source codes have hence been deeply studied. In this thesis, we have worked with these two kind of codes in the joint source-channel coding context. A state model for soft decoding of variable length and quasi-arithmetic codes is proposed. This state model is parameterized by an integer T that controls a trade-off between decoding performance and complexity. The performance of these source codes on the aggregated state model is then analyzed together with their resynchronisation properties. It is hence possible to foresee the performance of a given code with respect to the aggregation parameter T. A robust decoding scheme exploiting side information is then presented. The extra redundancy is under the form of partial length constraints at different time instants of the decoding process. Finally, two different distributed coding schemes based on quasi-arithmetic codes are proposed. The first one is based on puncturing the output of the quasi-arithmetic bit-stream, while the second uses a new kind of codes : overlapped quasi-arithmetic codes. The decoding performance of these schemes is very competitive compared to classical techniques using channel codes
Ouled Zaid, Azza. "Amélioration des performances des systèmes de compression JPEG et JPEG2000." Poitiers, 2002. http://www.theses.fr/2002POIT2294.
Full text
Barland, Rémi. "Évaluation objective sans référence de la qualité perçue : applications aux images et vidéos compressées." Nantes, 2007. http://www.theses.fr/2007NANT2028.
Full textThe conversion to the all-digital and the development of multimedia communications produce an ever-increasing flow of information. This massive increase in the quantity of data exchanged generates a progressive saturation of the transmission networks. To deal with this situation, the compression standards seek to exploit more and more the spatial and/or temporal correlation to reduce the bit rate. The reduction of the resulting information creates visual artefacts which can deteriorate the visual content of the scene and thus cause troubles for the end-user. In order to propose the best broadcasting service, the assessment of the perceived quality is then necessary. The subjective tests which represent the reference method to quantify the perception of distortions are expensive, difficult to implement and remain inappropriate for an on-line quality assessment. In this thesis, we are interested in the most used compression standards (image or video) and have designed no-reference quality metrics based on the exploitation of the most annoying visual artefacts, such as the blocking, blurring and ringing effects. The proposed approach is modular and adapts to the considered coder and to the required ratio between computational cost and performance. For a low complexity, the metric quantifies the distortions specific to the considered coder, only exploiting the properties of the image signal. To improve the performance, to the detriment of a certain complexity, this one integrates in addition, cognitive models simulating the mechanisms of the visual attention. The saliency maps generated are then used to refine the proposed distortion measures purely based on the image signal
Jégou, Hervé. "Codes robustes et codes joints source-canal pour transmission multimédia sur canaux mobiles." Rennes 1, 2005. https://tel.archives-ouvertes.fr/tel-01171129.
Full text
Chebbo, Salim. "Méthodes à compléxite réduite pour amélioration de la qualité des séquences vidéo codées par blocks." Paris, Télécom ParisTech, 2010. http://www.theses.fr/2010ENST0026.
Full text
The objective of this thesis is to propose real-time solutions to reduce video compression impairments, namely blocking, ringing and temporal flickering. The proposed deblocking filter is mainly based on an adaptive conditional two-dimensional filter, derived from the combination, in the horizontal and vertical directions, of a simple two-mode conditional 1-D filter. Appropriate filters are selected according to the local degradation of the image, which is assessed by examining the quantization step as well as the computed spatial pixel activities. The ringing artifact reduction algorithm uses a simple classification method to differentiate flat and edge blocks, which are then filtered using a particular weighted median filter. Regarding the temporal impairments, we proposed a new measure to assess the level of these impairments and accordingly estimate the temporal quality of the decoded sequences. The preliminary study of temporal compression artifacts demonstrated that the level of temporal fluctuation between consecutive frames is affected by the compression ratio and by the group-of-pictures structure, notably the presence (and period) of intra-coded frames in the video sequence. It was also shown that the deringing process reduces the visibility of mosquito noise; however, temporal filtering remains necessary to reduce the fluctuation of background areas. For this reason, we proposed to temporally filter these areas and skip the temporal filtering of moving objects. Finally, the implementation complexity of the proposed solutions was investigated in order to prove their applicability to real-time applications.
Adjih, Cédric. "Multimédia et accès à l'Internet haut débit : l'analyse de la filière du câble." Versailles-St Quentin en Yvelines, 2001. http://www.theses.fr/2001VERS0017.
Full text
Boumezzough, Ahmed. "Vers un processeur optoélectronique holographique de cryptage des données à haut débit pour les télécommunications." Université Louis Pasteur (Strasbourg) (1971-2008), 2005. http://www.theses.fr/2005STR13025.
Full textWith the development of multi-media networks and the growing exchanges of information, the scientific community is interested more and more in techniques of encryption and compression in order to make these exchanges safe and to gain memory storage capacity. Several numerical algorithms of encryption and compression were proposed. However the numerical coding of information requires significant computing times and the processing can be very expensive. This led to the consideration of optics as a solution, particularly for images. This choice is justified by the facility which optics carries out two dimensional Fourier transforms with high parallelism. This PhD thesis is built around two neighbouring themes: compression and multiplexing of information on the one hand, compression and encryption on the other hand. These two problems are tackled with the same approach which is filtering. The data are processed in the Fourier domain. As optical correlation, the segmented filter is a way to fuse information in the Fourier domain between several references. It allows extracting specific or relevant information to each or several references and rejecting the redundant information; thus, a compression operation is done. In the same manner, we have used a similar approach for encrypting operations in images
Doutsi, Effrosyni. "Compression d'images et de vidéos inspirée du fonctionnement de la rétine." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4011/document.
Full text
The goal of this thesis is to propose a novel video coding architecture inspired by the mammalian visual system and the retina. If one sees the retina as a machine which processes the visual stimulus, it appears to be an intelligent and very efficient model to mimic: it consumes little power, it deals with high-resolution inputs, and the dynamic way it transforms and encodes the visual stimulus is beyond the current standards. We were therefore motivated to study and release a retina-inspired video codec. The proposed algorithm was applied to a video stream in a very simple way, in the spirit of coding standards like MJPEG or MJPEG2000. Even so, this approach allows the reader to study and explore all the advantages of the retina's dynamic processing in terms of compression and image processing. The current performance of the retina-inspired codec is very promising according to final results which outperform MJPEG for bitrates lower than 100 kbps and MPEG-2 for bitrates higher than 70 kbps. In addition, for lower bitrates, the retina-inspired codec outlines the content of the input scene better. There are many perspectives concerning the improvement of the retina-inspired video codec which seem to lead to a groundbreaking compression architecture. Hopefully, this manuscript will be a useful tool for all researchers who would like to go beyond the perceptual capability of the mammalian visual system and understand how the structure and functions of the retina can, in practice, improve coding algorithms.
Ghenania, Mohamed. "Techniques de conversion de format entre codeurs CELP normalisés : Speech coding format conversion between standardized CELP coders." Rennes 1, 2005. http://www.theses.fr/2005REN11038.
Full text
Moravie, Philippe. "Parallélisation d'une méthode de compression d'images : transformée en ondelettes, quantification vectorielle et codage d'Huffman." Toulouse, INPT, 1997. http://www.theses.fr/1997INPT123H.
Full textRawat, Priyanka. "Improving efficiency of tunneling mechanisms over IP networks." Télécom Bretagne, 2010. http://www.theses.fr/2010TELB0131.
Full textOver the years, tunneling has been used to provide solution for network security problems and migration of networks (from IPv4 to IPv6). More recently tunneling has found application in mobility support solutions such as MobileIP and NEMO Basic Support that provide moving nodes or networks with a transparent Internet access. However, the use of tunneling leads to high protocol header overhead due to multiple protocol headers in each packet. This overhead is even more noticeable on wireless links, which have scarce resources, and mobile networks that are typically connected by means of low bandwidth wireless links. This makes tunneling less efficient and results in performance deterioration in wireless and mobile networks. Thus, there is a requirement to improve efficiency and performance of tunneling over IP networks. We consider header compression for protocol headers to reduce overhead due to IP tunnels. We examine the behavior of ROHC (Robust Header Compression) over long delay links and tunnels. We show that ROHC can be applied over tunnels with proper configurations. We discuss the problems encountered, in presence of tunneling, when ROHC is used over the entire encapsulation consisting of inner IP headers of the IP packet and tunneling headers. This motivates us to design and implement TuCP (Tunneling Compression Protocol), a novel header compression for tunneling protocol headers. TuCP reduces tunneling header overhead and allows an efficient use and deployment of tunneling in wireless and mobile networks. In addition, it provides a solution to manage out of order packets which enables the usage of existing header compression methods like ROHC over IP tunnels
Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
Full textThe acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, or remote system control. The live streaming industry has grown in 2020 and it will expand further in the next few years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G).HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, using algorithms to adapt the transmission rate of compressed video, have proven to be very effective in improving the quality of experience (QoE) in a video-on-demand (VOD) context.Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are developed to optimize video transmission from a server situated in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal. The latter will acquire, compress, and transmit the video and transmit the compressed stream via a radio access channel to one or more clients. Therefore, client-driven rate adaptation approaches are unsuitable in this context because of the variability of the channel characteristics. In addition, HAS, for which the decision-making is done with a periodicity of the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity in order to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes it more difficult to adapt the throughput in our use case. When the bandwidth varies with a time constant smaller than the period with which the regulation is made, bad transmission rate decisions can induce a significant latency overhead.The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted. This approach uses information about the buffer level of the transmitter and about the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. Hence, we have proposed a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results in the context of a frame-by-frame decision on the coding rate than the reference models in the literature.In addition to the above techniques, we have also proposed tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder. Therefore, the VVC encoder is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. 
This work contributes to the realization of a real-time VVC coder.
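For illustration only, here is a generic single-step sketch of frame-by-frame rate control under a transmit-buffer constraint. It is not the MPC controller of the thesis, and the exponential rate-versus-QP model below is a stand-in assumption; the thesis' model ties the frame rate to the current QP and to the previous frame's distortion:

```python
# Generic sketch of frame-by-frame rate control under a transmit-buffer
# constraint.  This is NOT the thesis' MPC controller: the exponential
# rate-vs-QP model below is a stand-in assumption, whereas the thesis uses a
# model driven by the current QP and the previous frame's distortion.
import math

ALPHA, BETA = 400_000.0, 0.12   # hypothetical rate-model parameters (bits vs. QP)
FRAME_PERIOD = 1.0 / 30.0       # 30 frames per second

def predicted_bits(qp: int) -> float:
    return ALPHA * math.exp(-BETA * qp)

def choose_qp(buffer_bits: float, channel_bps: float, target_buffer_bits: float) -> int:
    """Smallest QP whose predicted frame size keeps the buffer under its target."""
    drained = channel_bps * FRAME_PERIOD
    for qp in range(10, 52):    # H.26x-style QP range
        if buffer_bits + predicted_bits(qp) - drained <= target_buffer_bits:
            return qp
    return 51                   # worst case: coarsest quantization

# a bandwidth drop immediately forces a larger QP for the very next frame
print(choose_qp(buffer_bits=20_000, channel_bps=2_000_000, target_buffer_bits=60_000))
print(choose_qp(buffer_bits=20_000, channel_bps=500_000, target_buffer_bits=60_000))
```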
Totozafiny, Théodore. "Compression d'images couleur pour application à la télésurveillance routière par transmission vidéo à très bas débit." Pau, 2007. http://www.theses.fr/2007PAUU3003.
Full text
This thesis presents a feasibility study for transmitting images over GSM wireless networks (9600 bits/s maximal bit rate) at a frequency of one image per second, using widespread image or video codecs. Due to this constraint, the maximal size of the image data is 1.2 KiB. In particular, the following aspects were studied. First, the type of data to be sent: video streaming or still image sequencing. We carried out several comparative tests, in the context of video surveillance, between video streaming with the MPEG-4 video coding standard (currently the most widespread) and still image sequencing with the JPEG2000 coding standard (currently the best compression ratio). The second aspect is the maximal reduction of the transmitted data. Our approach is to divert the JPEG2000 ROI feature from its original purpose in order to obtain a very high compression ratio, around 1:250. The image is divided into two areas: background and regions of interest (i.e. mobile object areas). Only the mobile object regions of the image are compressed with the JPEG2000 ROI feature, implemented using the Maxshift technique. Our technique exploits this property by sending exclusively the ROI data. The last aspect concerns the updating of the reference image at the decoder. We propose an original technique to update it piece by piece, where the pieces represent the relevant areas of the reference image. To achieve this, we employ the same mechanism used for encoding the mobile object regions. The updating strategy is defined according to the rate of mobile pixels in the image, considering the three following configurations: no mobile objects, few mobile objects or many mobile objects.
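The Maxshift mechanism that the thesis diverts can be sketched in a few lines: the encoder scales the ROI wavelet coefficients so that their bit-planes sit entirely above every background bit-plane, and the decoder can then recognise ROI coefficients by magnitude alone, without any shape mask. A toy illustration of that principle (not the thesis implementation):

```python
# Toy sketch of the JPEG2000 Maxshift principle (not the thesis code): ROI
# wavelet coefficients are scaled up by s bit-planes, with 2**s larger than
# every background magnitude, so the decoder recognises them by magnitude
# alone and needs no shape mask.
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.integers(-120, 121, size=(8, 8))    # toy quantized wavelet coefficients
roi_mask = np.zeros((8, 8), dtype=bool)
roi_mask[2:5, 3:6] = True                        # hypothetical moving-object region

s = int(np.ceil(np.log2(np.abs(coeffs[~roi_mask]).max() + 1)))   # background bit-planes
shifted = np.where(roi_mask, coeffs * (1 << s), coeffs)          # encoder: scale the ROI up

# decoder side: any coefficient reaching bit-plane s belongs to the ROI
decoded_mask = np.abs(shifted) >= (1 << s)
recovered = np.where(decoded_mask, shifted // (1 << s), shifted)

assert np.array_equal(recovered, coeffs)                          # lossless round trip
assert np.array_equal(decoded_mask, roi_mask & (coeffs != 0))     # ROI found without a mask
print(f"scaling factor: s = {s} bit-planes")
```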
Ismaïl, Mohamed Amine. "Study and optimization of data protection, bandwidth usage and simulation tools for wireless networks." Nice, 2010. http://www.theses.fr/2010NICE4074.
Full text
Today, many technical challenges remain in the design of wireless networks to support emerging services. The main contributions of this thesis are three-fold in addressing some of these issues. The first contribution addresses the reliability of wireless links, in particular through data protection against long fading times (also known as slow fading) in the context of a direct satellite-to-mobile link. We propose an innovative algorithm, called Multi Burst Sliding Encoding (MBSE), that extends the existing DVB-H intra-burst (MPE-FEC) protection to an inter-burst protection. Our MBSE algorithm allows complete burst losses to be recovered, while taking into account the specificity of mobile hand-held devices. Based on an optimized data organization, our algorithm provides protection against long-term fading, while still using the Reed-Solomon code already implemented in mobile hand-held chipsets. MBSE has been approved by the DVB Forum and was integrated into the DVB-SH standard, in which it now plays a key role. The second contribution is related to the practical optimization of bandwidth usage in the context of wireless links. We have proposed WANcompress, a bandwidth compression technique for detecting and eliminating redundant network traffic by sending only a label instead of the original packets. It differs from standard compression techniques in that it removes redundant patterns over a large range of time (days/weeks, i.e. gigabytes), whereas existing compression techniques operate on smaller window scales (seconds, i.e. a few kilobytes). We performed intensive experiments that achieved compression factors of up to 25 times, and acceleration factors of up to 22 times. In a corporate trial conducted over a WiMAX network for one week, WANcompress improved the bitrate by up to 10 times, and on average 33% of the bandwidth was saved. The third contribution is related to the simulation of wireless networks. We have proposed an 802.16 WiMAX module for the widely used ns-3 simulator. Our module provides a detailed and standard-compliant implementation of the Point-to-Multi-Point (PMP) topology with Time Division Duplex (TDD) mode. It supports a large number of features, thus enabling the simulation of a rich set of WiMAX scenarios and providing close-to-real results. These features include Quality of Service (QoS) management, efficient scheduling for both uplink and downlink, packet classification, bandwidth management, dynamic flow creation, as well as scalable OFDM physical layer simulation. This module was merged into the main development branch of the ns-3 simulator, and has become one of its standard features as of version 3.8.
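The principle behind WANcompress-style redundancy elimination, sending a short label whenever a chunk of traffic has already been seen by both ends, can be illustrated with a simplified stand-in (fixed-size chunks and an 8-byte label are assumptions; the actual algorithm works over much larger, gigabyte-scale histories):

```python
# Simplified stand-in for traffic-redundancy elimination (not the WANcompress
# algorithm itself): both ends keep a cache of chunks already seen; when a
# chunk repeats, only a short label (a hash) crosses the link.
import hashlib

CHUNK = 512   # fixed-size chunking here; real systems use content-defined boundaries
LABEL = 8     # bytes of the hash actually sent as the label (assumption)

def transmit(stream: bytes, cache: set) -> int:
    """Return the number of bytes that would actually cross the link."""
    sent = 0
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha1(chunk).digest()
        if digest in cache:
            sent += LABEL                # chunk known on both sides: label only
        else:
            cache.add(digest)
            sent += LABEL + len(chunk)   # first occurrence: label plus raw data
    return sent

cache = set()
payload = b"GET /weather/today HTTP/1.1\r\n" * 40      # highly repetitive traffic
first = transmit(payload, cache)
second = transmit(payload, cache)                      # the same content sent again
print(f"first pass: {first} B, repeat pass: {second} B, "
      f"saving on repeat: {100 * (1 - second / len(payload)):.1f} %")
```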
Tizon, Nicolas. "Codage vidéo scalable pour le transport dans un réseau sans fil." Paris, ENST, 2009. http://www.theses.fr/2009ENST0032.
Full textBitrate adaptation is a key issue when considering streaming applications involving throughput limited networks with error prone channels, as wireless networks. The emergence of recent source coding standards like the scalable extension of H. 264/AVC namely Scalable Video Coding (SVC), that allows to encode in the same bitstream a wide range of spatio-temporal and quality layers, offers new adaptation facilities. The concept of scalability, when exploited for dynamic channel adaptation purposes, raises at least two kinds of issues: how to measure network conditions and how to differentiate transmitted data in terms of distortion contribution ? In this document, we propose and compare different approaches in terms of network architecture in order to comply with different practical requirements. The first approach consists in a video streaming system that uses SVC coding in order to adapt the input stream at the radio link layer as a function of the available bandwidth, thanks to a Media Aware Network Element (MANE) that assigns priority labels to video packets. The second approach consists in not modifying the existing network infrastructure and keeping the adaptation operations in the server that exploits long term feedbacks from the client. Moreover, in this document, we present a recursive distortion model, which is used to dynamically calculate the contribution of each packet to the final distortion. Finally, in the scope of lossy compression with subband decomposition and quantization, a contribution has been proposed in order to jointly resize decoded pictures and adapt the inverse transformation matrices following quantization noise and images content
Do, Quoc Bao. "Adaptive Post-processing Methods for Film and Video Quality Enhancement." Paris 13, 2011. http://www.theses.fr/2011PA132030.
Full text
The introduction of new digital processing and coding techniques for visual content in the film industry has allowed filmmakers to achieve great technological and commercial advancements. Indeed, the automation of certain complex tasks has enabled productivity gains and has brought advances in terms of reliability and technical accuracy. Picture quality is one of the most important factors in the film industry. The main objective of this thesis work is therefore to propose new methods for improving the quality of high-definition video in the context of digital cinema. Here we focus on some known annoying artifacts and distortions. A new and less studied artifact occurring during the color processing of the film is also analyzed. All the proposed solutions are developed in a highly constrained environment dictated by the cinema post-production framework. The performance of the developed methods is evaluated using objective measures and criteria. The obtained results show that the proposed methods can provide efficient solutions for improving HD film quality. Some perspectives for extending these solutions to other visual content are considered.
Boisson, Guillaume. "Représentations hautement scalables pour la compression vidéo sur une large gamme de débits / résolutions." Rennes 1, 2005. ftp://ftp.irisa.fr/techreports/theses/2005/boisson.pdf.
Full textDhif, Imen. "Compression, analyse et visualisation des signaux physiologiques (EEG) appliqués à la télémédecine." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066393.
Full text
Due to the large amount of EEG data acquired over several days, an efficient compression technique is necessary. The lack of experts and the short duration of epileptic seizures call for the automatic detection of these seizures. Furthermore, a uniform viewer is mandatory to ensure interoperability and a correct reading of transmitted EEG exams. The certified medical image coder WAAVES provides high compression ratios (CR) while ensuring image quality. This thesis addresses three challenges: adapting the WAAVES coder to the compression of EEG signals, automatically detecting epileptic seizures in an EEG signal, and ensuring the interoperability of EEG exam viewers. The study of WAAVES shows that this coder is unable to remove spatial correlation or to directly compress one-dimensional signals. Therefore, we applied ICA to decorrelate the signals, a scaling step to resize decimal values, and an image construction step. To keep diagnostic quality with a PDR of less than 7%, we coded the residue. The proposed compression algorithm, EEGWaaves, achieved a CR equal to 56. Subsequently, we proposed a new method of EEG feature extraction based on a new calculation model of the energy expected measurement (EAM) of EEG signals. Then, statistical parameters were calculated and neural networks were applied to classify and detect epileptic seizures. Our method achieved a sensitivity of up to 100% and an accuracy of 99.44%. The last chapter details the deployment of our multiplatform viewer of physiological signals, which meets the specifications established by doctors. The main role of this software is to ensure the interoperability of EEG exams between healthcare centers.
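Since WAAVES itself is a certified, proprietary image coder, only the preprocessing chain described above can be sketched here: ICA decorrelation, scaling to an integer range, and arrangement of the samples as an image. The snippet below is such a sketch with arbitrary toy dimensions, not the EEGWaaves implementation:

```python
# Sketch of the preprocessing chain described above: ICA decorrelation,
# scaling to an integer range, arrangement as an image.  WAAVES itself is
# proprietary, so the image-coding step is only represented by the resulting
# uint16 array; channel count and durations are arbitrary toy values.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
eeg = rng.standard_normal((23, 4096))       # 23 channels x 4096 samples (toy data)

ica = FastICA(n_components=23, random_state=0)
sources = ica.fit_transform(eeg.T).T        # spatially decorrelated components

lo, hi = sources.min(), sources.max()
image = np.round(65535 * (sources - lo) / (hi - lo)).astype(np.uint16)  # 16-bit "pixels"

# the 23 x 4096 uint16 array would now be handed to the image coder; decoding
# inverts the scaling and applies ica.inverse_transform, and the residue is
# coded whenever the diagnostic-quality threshold would otherwise be exceeded
restored = ica.inverse_transform((image.astype(float) / 65535 * (hi - lo) + lo).T).T
print("max absolute error before residue coding:", np.abs(restored - eeg).max())
```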
Tchiotsop, Daniel. "Modélisations polynomiales des signaux ECG : applications à la compression." Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL088N/document.
Full text
Developing new ECG data compression methods has become more important with the implementation of telemedicine. In fact, compression schemes could considerably reduce the cost of medical data transmission through modern telecommunication networks. Our aim in this thesis is to develop compression algorithms for ECG data using orthogonal polynomials. To start, we studied the physiological origin of the ECG, analysed the signal's patterns, including its characteristic waves, and reviewed some signal processing procedures generally applied to the ECG. We also made an exhaustive review of ECG data compression algorithms, putting special emphasis on methods based on polynomial approximation or polynomial interpolation. We next dealt with the theory of orthogonal polynomials: we tackled their mathematical construction and studied various interesting properties of orthogonal polynomials. The modelling of ECG signals with orthogonal polynomials includes two stages. Firstly, the ECG signal is divided into blocks after QRS detection; these blocks must match cardiac cycles. The second stage is the decomposition of the blocks onto polynomial bases. The decomposition yields coefficients which are used to synthesize the reconstructed signal. Compression consists in using a small number of coefficients to represent a block made of a large number of signal samples. We decomposed ECG signals onto several orthogonal polynomial bases: Laguerre polynomials and Hermite polynomials did not give good signal reconstruction. Interesting results were recorded with Legendre polynomials and Tchebychev polynomials. Consequently, our first algorithm for ECG data compression was designed using Jacobi polynomials. This algorithm can be optimized by suppressing boundary effects; it then becomes universal and can be used to compress other types of signals such as audio and image signals. Although Laguerre polynomials and Hermite functions could not individually give good signal reconstruction, we devised an association of both systems of functions to achieve ECG compression. For that matter, every block of the ECG signal that matches a cardiac cycle is split into two parts. The first part, consisting of the baseline section of the ECG, is decomposed into a series of Laguerre polynomials. The second part, made of the P-QRS-T waves, is modelled with Hermite functions. This second algorithm for ECG data compression is robust and very competitive.
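A minimal sketch of the block-wise orthogonal-polynomial idea, using a Legendre basis and a synthetic beat in place of a real, QRS-segmented ECG block (the thesis' actual coders are based on Jacobi polynomials and on a Laguerre/Hermite combination):

```python
# Minimal sketch of the block-wise orthogonal-polynomial idea: one "cardiac
# cycle" is approximated by a truncated Legendre expansion and only the
# expansion coefficients are kept.  The beat below is a crude synthetic
# stand-in for a real, QRS-segmented ECG block.
import numpy as np
from numpy.polynomial import legendre

n_samples, degree = 360, 40                       # one block of 360 samples, 41 coefficients kept
t = np.linspace(-1.0, 1.0, n_samples)

# cartoon beat: P wave, R peak and T wave modelled as Gaussian bumps
beat = (0.15 * np.exp(-((t + 0.5) / 0.15) ** 2)
        + 1.00 * np.exp(-(t / 0.15) ** 2)
        + 0.35 * np.exp(-((t - 0.45) / 0.20) ** 2))

coeffs = legendre.legfit(t, beat, degree)         # analysis: least-squares Legendre fit
approx = legendre.legval(t, coeffs)               # synthesis from the kept coefficients

cr = n_samples / (degree + 1)                     # samples stored vs. coefficients stored
prd = 100 * np.linalg.norm(beat - approx) / np.linalg.norm(beat)
print(f"compression ratio ~ {cr:.1f}:1, PRD = {prd:.3f} %")
```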
Fuchs, Christine. "Etude de la compression de signaux par dispositifs à lignes de transmission non linéaire." Chambéry, 2000. http://www.theses.fr/2000CHAMS028.
Full text
Bernard, Antoine. "Solving interoperability and performance challenges over heterogeneous IoT networks : DNS-based solutions." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS012.
Full textThe Internet of Things (IoT) evolved from its theoretical possibility to connect anything and everything to an ever-increasing market of goods and services. Its underlying technologies diversified and IoT now encompasses various communication technologies ranging from short-range technologies as Bluetooth, medium-range technologies such as Zigbee and long-range technologies such as Long Range Wide Area Network.IoT systems are usually built around closed, siloed infrastructures. Developing interoperability between these closed silos is crucial for IoT use-cases such as Smart Cities. Working on this subject at the application level is a first step that directly evolved from current practice regarding data collection and analysis in the context of the development of Big Data. However, building bridges at the network level would enable easier interconnection between infrastructures and facilitate seamless transitions between IoT technologies to improve coverage at low cost.The Domain Name System (DNS) basically developed to translate human-friendly computer host-names on a network into their corresponding IP addresses is a known interoperability facilitator on the Internet. It is one of the oldest systems deployed on the Internet and was developed to support the Internet infrastructure's growth at the end of the 80s. Despite its old age, it remains a core service on the Internet and many changes from its initial specifications are still in progress, as proven by the increasing number of new suggestions to modify its standard.DNS relies on simple principles, but its evolution since its first developments allowed to build complex systems using its many configuration possibilities. This thesis investigates possible improvements to IoT services and infrastructures. Our key problem can be formulated as follow: Can the DNS and its infrastructure serve as a good baseline to support IoT evolution as it accompanied the evolution of the Internet?We address this question with three approaches. We begin by experimenting with a federated roaming model IoT networks exploiting the strengths of the DNS infrastructure and its security extensions to improve interoperability, end-to-end security and optimize back-end communications. Its goal is to propose seamless transitions between networks based on information stored on the DNS infrastructure. We explore the issues behind DNS and application response times, and how to limit its impact on constrained exchanges between end devices and radio gateways studying DNS prefetching scenarios in a city mobility context. Our second subject of interest consists of studying how DNS can be used to develop availability, interoperability and scalability in compression protocols for IoT. Furthermore, we experimented around compression paradigms and traffic minimization by implementing machine learning algorithms onto sensors and monitoring important system parameters, particularly transmission performance and energy efficiency
Fila-Kordy, Barbara. "Automates pour l'analyse de documents XML compressés, applications à la sécurité d'accès." Orléans, 2008. http://www.theses.fr/2008ORLE2029.
Full text
Toubol, Dominique. "Contribution à l'étude de codeurs prédictifs adaptatifs avec quantificateur vectoriel : codeur pleine bande et codeur en bande de base." Nice, 1990. http://www.theses.fr/1990NICE4371.
Full text
Mejri, Asma. "Systèmes de communications multi-utilisateurs : de la gestion d'interférence au codage réseau." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0086.
Full text
This work is dedicated to the analysis, design and performance evaluation of physical-layer network coding strategies in multiuser communication systems. The first part is devoted to studying the compute-and-forward (CF) protocol in the basic multiple access channel. For this strategy, we propose an optimal solution for designing efficient network codes based on solving a lattice shortest vector problem. Moreover, we derive novel bounds on the ergodic rate and the outage probability for the CF operating in fast and slow fading channels respectively. Besides, we develop novel decoding algorithms proved numerically to outperform the traditional decoding scheme for the CF. The second part is dedicated to the design and end-to-end performance evaluation of network codes for the CF and for analog network coding in the two-way relay channel and the multi-source multi-relay channel. For each network model we study the decoding at the relay nodes and at the end destination, propose search algorithms for optimal network codes for the CF, and evaluate, theoretically and numerically, the end-to-end error rate and the achievable transmission rate. In the last part, we study new decoders for the distributed MIMO channel termed Integer Forcing (IF). Inspired by the CF, IF receivers take advantage of the interference provided by the wireless medium to decode integer linear combinations of the original codewords. We develop efficient algorithms to select the optimal IF receiver parameters, allowing existing suboptimal linear receivers to be outperformed.
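What the network-code selection optimises can be illustrated with a brute-force search over small integer coefficient vectors using the standard compute-and-forward computation-rate expression; the thesis solves this optimally as a lattice shortest-vector problem, which the sketch below does not attempt:

```python
# Illustration of what the network-code selection optimises (the thesis solves
# this as a lattice shortest-vector problem; here a brute-force search is used
# instead).  For a real-valued two-user MAC with channel vector h and power P,
# the compute-and-forward computation rate of an integer vector a is
#   R(h, a) = max(0, 0.5 * log2( 1 / (||a||^2 - P*(h.a)^2 / (1 + P*||h||^2)) )).
import itertools
import math
import numpy as np

def computation_rate(h: np.ndarray, a: np.ndarray, power: float) -> float:
    q = np.dot(a, a) - power * np.dot(h, a) ** 2 / (1.0 + power * np.dot(h, h))
    return max(0.0, 0.5 * math.log2(1.0 / q))   # q > 0 for any non-zero integer a

h = np.array([1.0, 1.37])   # example channel realisation
power = 10.0                # transmit SNR (assumed)

best = max(
    (a for a in itertools.product(range(-4, 5), repeat=2) if any(a)),
    key=lambda a: computation_rate(h, np.array(a, dtype=float), power),
)
print("best integer coefficients:", best,
      "-> rate", round(computation_rate(h, np.array(best, dtype=float), power), 3),
      "bit per real channel use")
```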
Vo, Nguyen Dang Khoa. "Compression vidéo basée sur l'exploitation d'un décodeur intelligent." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4136/document.
Full text
This Ph.D. thesis studies the novel concept of the Smart Decoder (SDec), where the decoder is given the ability to simulate the encoder and to conduct the R-D competition in the same way as the encoder. The proposed technique aims to reduce the signaling of competing coding modes and parameters. The general SDec coding scheme and several practical applications are proposed, followed by a long-term approach exploiting machine learning concepts in video coding. The SDec coding scheme exploits a complex decoder able to reproduce the choices of the encoder based on causal references, thus eliminating the need to signal coding modes and associated parameters. Several practical applications of the general SDec scheme are tested, using different coding modes during the competition on the reference blocks. Although the choice of the SDec reference block is still simple and limited, interesting gains are observed. The long-term research presents an innovative method that makes further use of the processing capacity of the decoder. Machine learning techniques are exploited in video coding with the purpose of reducing the signaling overhead. Practical applications are given, using a classifier based on support vector machines to predict the coding modes of a block. The block classification uses causal descriptors which consist of different types of histograms. Significant bit rate savings are obtained, which confirms the potential of the approach.
Hénocq, Xavier. "Contrôle d'erreur pour transmission de flux vidéo temps réel sur réseaux de paquets hétérogènes et variant dans le temps." Rennes 1, 2002. http://www.theses.fr/2002REN10020.
Full text
Buisson, Alexandre. "Implémentation efficace d'un codeur vidéo hiérarchique granulaire sur une architecture à processeurs multimedia." Rennes 1, 2002. http://www.theses.fr/2002REN10083.
Full text
Guyader, Arnaud. "Contribution aux algorithmes de décodage pour les codes graphiques." Rennes 1, 2002. http://www.theses.fr/2002REN10014.
Full text
Jerbi, Khaled. "Synthese matérielle haut niveau des programmes flot de données." Rennes, INSA, 2012. https://tel.archives-ouvertes.fr/tel-00827163.
Full text
The evolution of video processing algorithms has led to the advent of several standards. These standards share many common algorithms, but designers are not able to reuse them because of their monolithic description. To solve this problem, the ISO/IEC MPEG committee created the Reconfigurable Video Coding (RVC) standard, based on the idea that processing algorithms can be defined as a library of components that can be updated separately. Thus, the components of the modular library are standardized instead of the whole decoder. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called the CAL Actor Language (CAL). RVC also provides a compilation framework for CAL targeting hardware and software, but hardware compilers cannot compile the high-level features which are omnipresent in most advanced designs. In this thesis, the CAL language is used to develop a baseline of the LAR still image coder. The problem of hardware generation is then resolved using automatic transformations of the high-level features into their equivalent low-level ones. These transformations are validated using different designs.
Pereira, Roger. "Conception d’une cellule déphaseuse active : bipolarisation pour réseaux réflecteurs en bande X." Rennes 1, 2011. http://www.theses.fr/2011REN1S029.
Full text
In this work, electrically steerable beam antennas based on the concept of reflectarray antennas are studied. A new design using an active dual-polarisation unit cell for reflectarray applications is investigated. The unit cell is made of two crossed dipoles placed orthogonally inside a metallic cavity (waveguide). This cavity is closed by a short circuit. To maintain a low level of cross-polarization, it is shown that the symmetry of the TE10 and TE01 modes has to be respected both geometrically and electrically. Thus, the control of the active elements should also be symmetric. This reduces the total number of realizable states for the cell. As a first step, the concept was validated with passive cells in the X frequency band. The experimental results obtained for these cells allowed us to take the study a step further, towards active cell configurations. Because of the availability of a mature technology, PIN diodes were selected as the switching devices. The dispersive nature of the PIN diodes creates a new type of dissymmetry problem. However, this kind of dissymmetry is more critical than the one introduced by geometric errors during fabrication. Nevertheless, a solution that reduces (and nearly eliminates) this undesirable effect is presented and validated experimentally.
Akbari, Ali. "Modélisation parcimonieuse des signaux : application a la compression d'image, compensation d'erreurs et à l'acquisition comprimée." Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS461.pdf.
Full text
Signal models are a cornerstone of contemporary signal and image processing methodology. In this work, two particular signal modeling methods, called analysis and synthesis sparse representation, are studied; they have been proven to be effective for many signals, such as natural images, and have been successfully used in a wide range of applications. Both models represent signals in terms of linear combinations of an underlying set, called a dictionary, of elementary signals known as atoms. The driving force behind both models is the sparsity of the representation coefficients, i.e. the rapid decay of the representation coefficients over the dictionary. On the other hand, the choice of dictionary determines the success of the entire model. Following these two signal models, there have been two main approaches to dictionary design: the harmonic analysis approach and the machine learning methodology. The former leads to dictionaries with easy and fast implementation, while the latter provides a simple and expressive structure for designing adaptable and efficient dictionaries. The main goal of this thesis is to provide new applications of these signal modeling methods by addressing several problems from various perspectives. It begins with the direct application of sparse representation, i.e. image compression. The line of research followed in this area is the synthesis-based sparse representation approach, in the sense that the dictionary is not fixed and predefined but learned from training data and adapted to the data, yielding a more compact representation. A new image codec based on adaptive sparse representation over a trained dictionary is proposed, wherein different sparsity levels are assigned to the image patches belonging to the salient regions, which are more conspicuous to the human visual system. Experimental results show that the proposed method outperforms the existing image coding standards, such as JPEG and JPEG2000, which use an analytic dictionary, as well as the state-of-the-art codecs based on trained dictionaries. The next part of the thesis focuses on another important application of sparse signal modeling, i.e. solving inverse problems, especially error concealment (EC), wherein a corrupted image is reconstructed from incomplete data, and Compressed Sensing (CS) recovery, where an image is reconstructed from a limited number of random measurements. Signal modeling is usually used as prior knowledge about the signal to solve these NP-hard problems. In this thesis, inspired by the analysis and synthesis sparse models, these challenges are transferred into two distinct sparse recovery frameworks and several recovery methods are proposed. Compared with the state-of-the-art EC and CS algorithms, experimental results show that the proposed methods achieve better reconstruction performance in terms of objective and subjective evaluations. The thesis ends with some conclusions and directions for future work.
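A sketch of the synthesis-sparse coding step at the heart of the proposed codec: one image patch is approximated by a few dictionary atoms with Orthogonal Matching Pursuit. The dictionary here is random for illustration, whereas the codec described above learns it from training patches and raises the sparsity level for salient patches:

```python
# Sketch of synthesis-sparse coding of a single image patch with Orthogonal
# Matching Pursuit.  The dictionary is random here for illustration; in the
# codec described above it is learned from training patches, and the sparsity
# level k would be raised for patches lying in salient regions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
dim, n_atoms, k = 64, 256, 6                      # 8x8 patches, 4x overcomplete dictionary

dictionary = rng.standard_normal((dim, n_atoms))
dictionary /= np.linalg.norm(dictionary, axis=0)  # unit-norm atoms

# a patch that truly is a combination of a few atoms, plus a little noise
truth = np.zeros(n_atoms)
truth[rng.choice(n_atoms, size=k, replace=False)] = rng.standard_normal(k)
patch = dictionary @ truth + 0.01 * rng.standard_normal(dim)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
code = omp.fit(dictionary, patch).coef_           # sparse code: at most k nonzero entries

reconstruction = dictionary @ code
print("nonzeros kept:", np.count_nonzero(code),
      "| relative error:", round(float(np.linalg.norm(patch - reconstruction)
                                       / np.linalg.norm(patch)), 4))
```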
Abdellatif, Slim. "Contribution à la modélisation et à l'analyse de la qualité de service dans les réseaux à commutation de paquets." Toulouse 3, 2002. http://www.theses.fr/2002TOU30041.
Full textSidaty, Naty. "Exploitation de la multimodalité pour l'analyse de la saillance et l'évaluation de la qualité audiovisuelle." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2309/document.
Full textAudiovisual information is part of our daily life, whether for professional needs or simply for leisure. The plethoric quantity of data requires compression for both storage and transmission, which may alter the audiovisual quality if perceptual aspects are not taken into account. The literature on saliency and quality is very rich, but it often ignores the audio component, which plays an important role in the visual scanpath and in the quality of experience. This thesis aims at filling this lack of multimodal approaches by following dedicated experimental procedures. The proposed work is twofold: visual attention modelling and multimodal quality evaluation. First, in order to better understand and analyze the influence of audio on human eye movements, we ran several eye-tracking experiments involving a panel of observers and exploiting a video dataset constructed for our context. The importance of faces was confirmed, with talking faces in particular showing increased saliency. Following these results, we proposed an audiovisual saliency model based on the detection of speakers in the video and relying on spatial and temporal low-level features. Afterwards, the influence of audio on multimodal and multi-device quality was studied. To this end, psychovisual experiments were conducted with the aim of quantifying multimodal quality in the context of video streaming applications where various display devices may be used.
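Purely as a toy illustration of the kind of fusion an audiovisual saliency model may perform (the gain, the face boxes and the combination rule below are assumptions, not the model proposed in the thesis):

```python
import numpy as np

def fuse_saliency(low_level_map, face_boxes, face_gain=2.0):
    """Toy audiovisual fusion: boost low-level saliency inside regions where a
    talking face was detected (the detection itself is assumed to come from an
    external audio/video speaker detector)."""
    fused = low_level_map.astype(float).copy()
    for (x0, y0, x1, y1) in face_boxes:
        fused[y0:y1, x0:x1] *= face_gain
    return fused / fused.max()            # renormalise to [0, 1]

# Usage with a random 120x160 map and one hypothetical talking-face box.
rng = np.random.default_rng(1)
sal = rng.random((120, 160))
print(fuse_saliency(sal, [(60, 30, 100, 80)]).shape)
```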
Canourgues, Lucile. "Algorithmes de routage dans les réseaux mobile ad hoc tactique à grande échelle." Toulouse, INPT, 2008. http://ethesis.inp-toulouse.fr/archive/00000595/.
Full textThe current transformation of military networks adopts the MANET as a main component of the tactical domain. Indeed, a MANET is the right solution to enable highly mobile, highly reactive and quickly deployable tactical networks. Many applications, such as Situational Awareness, rely on group communications, underlining the need for a multicast service within the tactical environment where the MANET is employed as a transit network. The purpose of this thesis is to study the deployment of an optimal multicast service within this tactical environment. We first focus on defining the protocol architecture to use within the tactical network, paying particular attention to the MANET. This network is interconnected with different types of IP-based networks implementing potentially heterogeneous multicast protocols. The tactical MANET is assumed to consist of several hundred mobile nodes, which makes scalability crucial in the choice of the multicast protocol architecture. Since the concept of clustering offers interesting scalability features, we consider the MANET as a clustered network. We therefore define two multicast routing protocols adapted to the MANET: STAMP, which is in charge of multicast communications within each cluster, and SAFIR, which handles multicast flows between clusters. These two protocols, which can be implemented independently, act in concert to provide an efficient and scalable multicast service for the tactical MANET.
Liu, Yi. "Codage d'images avec et sans pertes à basse complexité et basé contenu." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0028/document.
Full textThis doctoral research project aims at designing an improved version of the still image codec called LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and are used in multimedia applications, but research keeps pushing towards higher coding quality and/or lower coding cost. JPEG was standardized twenty years ago, yet it is still a widely used compression format today. Although JPEG 2000 offers better coding efficiency, its adoption is limited by its higher computational cost compared to JPEG. In 2008, the JPEG Committee announced a Call for Advanced Image Coding (AIC), aiming to standardize potential technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework associates compression efficiency with a content-based representation, and supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO); this shortcoming was detrimental for LAR during the AIC evaluation step. Thus, this work first characterizes the impact of the main codec parameters on compression efficiency, and then constructs RDO models used to configure the LAR parameters so as to achieve optimal or sub-optimal coding efficiency. Further, based on the RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated by the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality measurement is taken into consideration and the RDO models are applied locally in the image rather than globally; perceptual quality is improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). Aiming at a low-complexity and efficient image codec, a new coding scheme is also proposed in lossless mode within the LAR framework. In this context, all the coding steps are modified to improve the final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves a compression ratio equivalent to that of JPEG 2000, while saving 76% of the encoding and decoding time on average.
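The "quality constraint" idea of encoding at a target MSE/PSNR can be illustrated, under the assumption of a scalar quality parameter that monotonically controls fidelity, by a simple bisection; the encode() callback and its parameter range below are hypothetical and only stand in for the LAR RDO models.

```python
def encode_at_target_psnr(encode, target_psnr, lo=1.0, hi=100.0, iters=20):
    """Bisection on a scalar quality parameter q, assuming encode(q) returns the
    PSNR of the decoded image and is monotonically increasing in q.
    This illustrates the generic 'quality constraint' idea, not the LAR models."""
    for _ in range(iters):
        q = 0.5 * (lo + hi)
        psnr = encode(q)
        if psnr < target_psnr:
            lo = q          # quality too low -> raise the parameter
        else:
            hi = q          # target met -> try a cheaper setting
    return hi

# Toy usage with a fake, monotonic encoder model.
fake_encoder = lambda q: 25.0 + 15.0 * (q / 100.0)
print(round(encode_at_target_psnr(fake_encoder, 34.0), 2))   # converges near q = 60
```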
Urban, Fabrice. "Implantation optimisée d'estimateurs de mouvement pour la compression vidéo sur plates-formes hétérogènes multicomposants." Phd thesis, INSA de Rennes, 2007. http://tel.archives-ouvertes.fr/tel-00266979.
Full textA state of the art of the various motion estimation methods and of existing hardware architectures is first presented. The HME and EPZS block-matching algorithms appear to be the most effective for our study. The development methodology used, as well as the implementation and optimization of motion estimators on DSPs, are then presented. A new motion estimation algorithm, HDS, is designed. Finally, parallel implementations on heterogeneous platforms are proposed.
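For readers unfamiliar with block matching, a minimal full-search sketch on the sum-of-absolute-differences (SAD) criterion is given below; HME, EPZS and the HDS algorithm designed in the thesis use much more elaborate search strategies, and this baseline is only what they accelerate.

```python
import numpy as np

def full_search_block_matching(ref, cur, bx, by, block=16, radius=8):
    """Exhaustive block matching: find the motion vector (dy, dx) minimising the
    SAD between the current block at (by, bx) and candidate blocks in the
    reference frame within +/- radius pixels."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_sad = (0, 0), float('inf')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue   # candidate block would fall outside the frame
            sad = np.abs(ref[y:y + block, x:x + block].astype(np.int32) - target).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, int(best_sad)

# Synthetic test: the current frame is the reference shifted by (3, -2).
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
mv, sad = full_search_block_matching(ref, cur, bx=24, by=24)
print(mv, sad)   # expected: (-3, 2) 0 for this interior block
```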
Babel, Marie. "Compression d'images avec et sans perte par la méthode LAR (Locally Adaptive Resolution)." Phd thesis, INSA de Rennes, 2005. http://tel.archives-ouvertes.fr/tel-00131758.
Full text… to errors.
The basic LAR (Locally Adaptive Resolution) method was designed for lossy compression at low bit rates. By exploiting the intrinsic properties of LAR, the definition of a self-extracting region representation provides an efficient coding solution both in terms of bit rate and in terms of reconstructed image quality. Locally variable bit-rate coding is facilitated by introducing the notion of region of interest, or VOP (Video Object Plane).
A lossless compression scheme was obtained jointly with the integration of scalability, through pyramidal methods. Combined with a prediction stage, three different coders meeting these requirements were developed: LAR-APP, Interleaved S+P and RWHT+P. LAR-APP (Predictive Pyramidal Approach) relies on an enriched prediction context obtained through an original traversal of the levels of the constructed pyramid. The entropy of the resulting estimation errors (estimation performed in the spatial domain) is thus reduced. By defining a solution operating in the transform domain, we were able to further improve the entropy performance of the scalable lossless coder. The Interleaved S+P coder is built by interleaving two pyramids of transformed coefficients. As for the RWHT+P method, it relies on a new form of the two-dimensional Walsh-Hadamard transform. The raw entropy performance proves far superior to the state of the art, with quite remarkable results obtained in particular on medical images.
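For reference, the classical integer S-transform that underlies S+P-type pyramids can be sketched as follows (a lossless average/difference decomposition of pixel pairs); the interleaving, prediction and RWHT stages specific to the thesis are not reproduced here.

```python
import numpy as np

def s_transform_pairs(row):
    """Classical S-transform on consecutive pixel pairs: an integer, perfectly
    invertible average/difference decomposition (the basis of S+P coding)."""
    a, b = row[0::2].astype(np.int32), row[1::2].astype(np.int32)
    s = (a + b) // 2          # integer average (low-pass)
    d = a - b                 # difference (high-pass)
    return s, d

def inverse_s_transform_pairs(s, d):
    """Exact inverse: recover the original pixel pair from (s, d)."""
    a = s + (d + 1) // 2
    b = a - d
    row = np.empty(2 * s.size, dtype=np.int32)
    row[0::2], row[1::2] = a, b
    return row

row = np.array([10, 13, 200, 198, 7, 7, 0, 255])
s, d = s_transform_pairs(row)
assert np.array_equal(inverse_s_transform_pairs(s, d), row)   # lossless round trip
```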
Furthermore, in a telemedicine context, by combining the LAR pyramidal methods with the Mojette transform, an efficient joint source-channel coding scheme, intended for the secure transmission of compressed medical images over low-bandwidth networks, was defined. This technique provides differentiated protection that integrates the hierarchical nature of the streams produced by the LAR multiresolution methods, for end-to-end quality of service.
Another line of research addressed in this thesis targets the automatic implementation of the LAR coders on heterogeneous multi-component parallel architectures. By describing the algorithms in the SynDEx software, we were able in particular to prototype the Interleaved S+P coder on multi-DSP and multi-PC platforms.
Finally, the extension of LAR to video is addressed here as essentially prospective work. Three different techniques are proposed, relying on a common element: the exploitation of the region-based representation mentioned above.
Duverdier, Alban. "Cyclostationnarité et changements d'horloge périodiques." Toulouse, INPT, 1997. http://www.theses.fr/1997INPT136H.
Full textNouri, Nedia. "Évaluation de la qualité et transmission en temps-réel de vidéos médicales compressées : application à la télé-chirurgie robotisée." Electronic Thesis or Diss., Vandoeuvre-les-Nancy, INPL, 2011. http://www.theses.fr/2011INPL049N.
Full textThe digital revolution in the medical environment speeds up the development of remote robotic-assisted surgery, and consequently the transmission of medical digital data such as pictures or videos becomes possible. However, medical video transmission requires significant bandwidth and high compression ratios, only achievable with lossy compression. Research effort has therefore been focused on video compression algorithms such as MPEG-2 and H.264. In this work, we investigate whether compression thresholds and the associated bitrates are consistent with the level of quality acceptable in the field of medical video. To evaluate compressed medical video quality, we performed a subjective assessment test with a panel of human observers using a DSCQS (Double-Stimulus Continuous Quality Scale) protocol derived from the ITU-R BT.500-11 recommendation. Promising results indicate that 3 Mbit/s could be sufficient as far as perceived quality is concerned, i.e. a compression ratio of about 90:1 compared to the original 270 Mbit/s. Determining this tolerance to lossy compression then allowed the implementation of a platform for the real-time transmission over an IP network of surgical videos compressed with the H.264 standard, between the University Hospital of Nancy and the school of surgery.
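As an indication of how DSCQS-style ratings are typically aggregated (the observer screening and confidence-interval procedures of ITU-R BT.500 are omitted, and the ratings below are invented):

```python
import numpy as np

def dscqs_difference_scores(ref_scores, test_scores):
    """DSCQS-style analysis: per-observer difference between the rating of the
    hidden reference and of the processed sequence (0-100 continuous scale).
    A positive mean means the processed video was judged worse than the reference."""
    ref = np.asarray(ref_scores, dtype=float)
    test = np.asarray(test_scores, dtype=float)
    diff = ref - test
    return diff.mean(), diff.std(ddof=1) / np.sqrt(diff.size)   # mean, standard error

# Hypothetical ratings from 8 observers for one compressed surgical sequence.
print(dscqs_difference_scores([82, 78, 90, 85, 80, 88, 76, 84],
                              [75, 70, 88, 80, 79, 82, 70, 81]))
```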
Cerra, Daniele. "Contribution à la théorie algorithmique de la complexité : méthodes pour la reconnaissance de formes et la recherche d'information basées sur la compression des données." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00562101.
Full textCochachin, Henostroza Franklin Rafael. "Noise-against-Noise Decoders : Low Precision Iterative Decoders." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS527.
Full textIn this thesis, two improved decoders for low-density parity-check (LDPC) codes are defined, using quantized channel inputs with only 3 or 4 bits of precision. A post-processing algorithm for low-precision iterative decoders is also proposed. The first proposed decoder, named Noise-Against-Noise Min-Sum (NAN-MS) decoder, incorporates a certain amount of random perturbation through deliberate noise injection. The second, named Sign-Preserving Min-Sum (SP-MS) decoder, always preserves the sign of the messages and uses all the possible message values that can be generated for a given precision. Moreover, the SP-MS decoder can reduce the precision of its messages by one bit while maintaining the same error-correcting performance. The NAN-MS and SP-MS decoders provide an SNR gain of up to 0.43 dB in the waterfall region of the performance curve. The proposed post-processing algorithm is also very efficient and easily adaptable to low-precision decoders. For the IEEE Ethernet code, the post-processing algorithm implemented in a very low-precision SP-MS decoder helps lower the error floor below a FER of 10⁻¹⁰. On a 28 nm ASIC technology, the implementation of a fully parallel architecture yields a decoder area of 1.76 mm², a decoding throughput of 319.34 Gbit/s, and a hardware efficiency of 181.44 Gbit/s/mm².
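To fix ideas, one check-node update of a plain quantized Min-Sum decoder can be sketched as below; the deliberate noise injection of NAN-MS and the sign-preserving message alphabet of SP-MS are specific to the thesis and are not reproduced in this sketch.

```python
import numpy as np

def check_node_min_sum(messages, bits=4):
    """One Min-Sum check-node update on quantized LLR messages.
    For each edge, the outgoing magnitude is the minimum magnitude over the
    other incoming edges, and the outgoing sign is the product of the other
    incoming signs; the result is saturated back to a `bits`-bit range."""
    m = np.asarray(messages, dtype=np.int32)
    signs = np.where(m < 0, -1, 1)
    mags = np.abs(m)
    total_sign = np.prod(signs)
    out = np.empty_like(m)
    for i in range(m.size):
        others = np.delete(mags, i)                 # extrinsic magnitudes
        out[i] = (total_sign * signs[i]) * others.min()
    lim = 2 ** (bits - 1) - 1                       # saturation limit
    return np.clip(out, -lim, lim)

print(check_node_min_sum([3, -5, 2, 7]))   # gives [-2  2 -3 -2] with 4-bit saturation
```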
Mejri, Asma. "Systèmes de communications multi-utilisateurs : de la gestion d'interférence au codage réseau." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0086/document.
Full textThis work is dedicated to the analysis, design and performance evaluation of physical-layer network coding strategies in multiuser communication systems. The first part is devoted to the study of the compute-and-forward (CF) protocol in the basic multiple access channel. For this strategy, we propose an optimal solution for designing efficient network codes, based on solving a lattice shortest vector problem. Moreover, we derive novel bounds on the ergodic rate and the outage probability of CF operating in fast and slow fading channels, respectively. We also develop novel decoding algorithms shown numerically to outperform the traditional decoding scheme for CF. The second part is dedicated to the design and end-to-end performance evaluation of network codes for CF and analog network coding in the two-way relay channel and the multi-source multi-relay channel. For each network model, we study the decoding at the relay nodes and at the end destination, propose search algorithms for optimal network codes for CF, and evaluate, theoretically and numerically, the end-to-end error rate and achievable transmission rate. In the last part, we study new decoders for the distributed MIMO channel termed integer-forcing (IF) receivers. Inspired by CF, IF receivers take advantage of the interference provided by the wireless medium to decode integer linear combinations of the original codewords. We develop efficient algorithms to select the optimal IF receiver parameters, allowing them to outperform existing suboptimal linear receivers.
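A brute-force illustration of the network-code selection step, using the real-valued compute-and-forward computation-rate expression of Nazer and Gastpar (the exact constants should be checked against the thesis, and the exhaustive search below only makes sense for tiny dimensions, whereas the thesis solves a lattice shortest-vector problem instead):

```python
import itertools
import numpy as np

def computation_rate(h, a, snr):
    """Compute-and-forward computation rate for channel h, integer coefficient
    vector a and transmit SNR, real-valued case (Nazer & Gastpar form)."""
    h = np.asarray(h, float)
    a = np.asarray(a, float)
    denom = np.dot(a, a) - snr * np.dot(h, a) ** 2 / (1.0 + snr * np.dot(h, h))
    if denom <= 0:
        return float('inf')      # cannot happen for a != 0 by Cauchy-Schwarz
    return max(0.0, 0.5 * np.log2(1.0 / denom))

def best_integer_vector(h, snr, amax=4):
    """Exhaustive search over small integer vectors maximising the rate."""
    best, best_rate = None, -1.0
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):                               # skip the all-zero vector
            r = computation_rate(h, a, snr)
            if r > best_rate:
                best_rate, best = r, a
    return best, best_rate

print(best_integer_vector([1.0, 0.97], snr=100.0))   # expect a = ±(1, 1) here
```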
Urvoy, Matthieu. "Les tubes de mouvement : nouvelle représentation pour les séquences d'images." Phd thesis, INSA de Rennes, 2011. http://tel.archives-ouvertes.fr/tel-00642973.
Full textGreco, Claudio. "Diffusion robuste de la vidéo en temps réel sur réseau sans fil." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0032.
Full textDuring the last decade, real-time video streaming over ad-hoc networks has gathered increasing interest, because of the attractive property of being able to deploy a visual communication system anytime and anywhere, without the need for a pre-existing infrastructure. A wide range of target applications, from military and rescue operations to business, educational, and recreational scenarios, has been envisaged, which has created great expectations with respect to the involved technologies. The goal of this thesis is to provide an efficient and robust real-time video streaming system over mobile ad-hoc networks, proposing cross-layer solutions that overcome the limitations of both the application and network solutions available at this time. Our contributions cover several aspects of the mobile video streaming paradigm: a new multiple description video coding technique, which provides acceptable video quality even in the presence of high loss rates; a novel cross-layer design for an overlay creation and maintenance protocol, which, with low overhead, manages in a distributed fashion a set of multicast trees, one for each description of the stream; an original distributed congestion-distortion optimisation framework, which, through a compact representation of the topology information, enables the nodes to learn the structure of the overlay and optimise their behaviour accordingly; and, finally, an integration with the emerging network coding paradigm.
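Purely as a generic illustration of the multiple-description idea (not the MDC scheme proposed in the thesis), a toy temporal splitting into two independently usable descriptions:

```python
def split_descriptions(frames):
    """Toy multiple description coding: even and odd frames form two descriptions;
    either one alone gives half the frame rate, both together give the full
    sequence. This is only a generic illustration of the MDC principle."""
    return frames[0::2], frames[1::2]

def merge_descriptions(d0, d1):
    """Reassemble whatever descriptions were received, in temporal order."""
    merged = []
    for i in range(max(len(d0), len(d1))):
        if i < len(d0):
            merged.append(d0[i])
        if i < len(d1):
            merged.append(d1[i])
    return merged

frames = list(range(10))
d0, d1 = split_descriptions(frames)
assert merge_descriptions(d0, d1) == frames    # both received: full sequence
print(d0)                                      # only one received: half frame rate
```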