
Dissertations / Theses on the topic 'Lossless and Lossy compression'



Consult the top 50 dissertations / theses for your research on the topic 'Lossless and Lossy compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hernandez-Cabronero, Miguel, Ian Blanes, Armando J. Pinho, Michael W. Marcellin, and Joan Serra-Sagrista. "Progressive Lossy-to-Lossless Compression of DNA Microarray Images." IEEE, 2016. http://hdl.handle.net/10150/615540.

Abstract:
The analysis techniques applied to DNA microarray images are under active development. As new techniques become available, it will be useful to apply them to existing microarray images to obtain more accurate results. The compression of these images can be a useful tool to alleviate the costs associated with their storage and transmission. The recently proposed Relative Quantizer (RQ) coder provides the most competitive lossy compression ratios while introducing only acceptable changes in the images. However, images compressed with the RQ coder can only be reconstructed with a limited quality, determined before compression. In this work, a progressive lossy-to-lossless scheme is presented to solve this problem. First, the regular structure of the RQ intervals is exploited to define a lossy-to-lossless coding algorithm called the Progressive RQ (PRQ) coder. Second, an enhanced version that prioritizes a region of interest, called the PRQ-region of interest (ROI) coder, is described. Experiments indicate that the PRQ coder offers progressivity with lossless and lossy coding performance almost identical to the best techniques in the literature, none of which is progressive. In turn, the PRQ-ROI coder exhibits very similar lossless coding results with better rate-distortion performance than both the RQ and PRQ coders.
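The progressive lossy-to-lossless behaviour described here can be illustrated with a generic bit-plane refinement sketch in Python; this shows only the general principle of successive refinement, not the PRQ coder's relative-quantizer intervals.

```python
def bitplane_stream(value: int, bits: int = 8):
    """Yield successively refined reconstructions of an 8-bit value,
    one bit-plane at a time, from most to least significant."""
    approx = 0
    for plane in range(bits - 1, -1, -1):
        approx |= value & (1 << plane)              # receive one more bit-plane
        # report the midpoint of the remaining uncertainty interval,
        # except after the last plane, where the value is exact
        yield approx + ((1 << plane) // 2 if plane > 0 else 0)

print(list(bitplane_stream(173)))   # refinements that end exactly at 173 (lossless)
```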
2

Kodukulla, Surya Teja. "Lossless Image compression using MATLAB : Comparative Study." Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20038.

Abstract:
Context: Image compression is one of the key and important applications in commercial, research, defence and medical fields. The larger image files cannot be processed or stored quickly and efficiently. Hence compressing images while maintaining the maximum quality possible is very important for real-world applications. Objectives: Lossy compression is widely popular for image compression and used in commercial applications. In order to perform efficient work related to images, the quality in many situations needs to be high while having a comparatively low file size. Hence lossless compression algorithms are used in this study to compare the lossless algorithms and to check which algorithm performs the compression retaining the quality with a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The compression techniques are implemented in MATLAB using the image processing toolbox. The compressed images are compared for subjective image quality. The images are compressed with emphasis on maintaining the quality rather than focusing on diminishing file size. Result: The LZW algorithm compression produces binary images, failing in this implementation to produce a lossless image. The Huffman and RLE algorithms produce similar results with compression ratios in the range of 2.5 to 3.7, and both are based on redundancy reduction. The DCT and DWT algorithms compress every element in the matrix defined for the images, maintaining lossless quality with compression ratios in the range 2 to 3.5. Conclusion: The DWT algorithm is best suited for a more efficient way to compress an image in a lossless technique. As wavelets are used in this compression, all the elements in the image are compressed while retaining the quality. Huffman and RLE produce lossless images, but for a large variety of images, some of the images may not be compressed with complete efficiency.
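As a rough illustration of the kind of lossless round trip and compression-ratio measurement compared in this thesis (not its MATLAB implementation), here is a minimal run-length encoder in Python:

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode: emit (count, byte) pairs, count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(encoded), 2):
        count, value = encoded[i], encoded[i + 1]
        out += bytes([value]) * count
    return bytes(out)

raw = bytes([0] * 500 + [255] * 300 + list(range(50)))  # toy "image row"
enc = rle_encode(raw)
assert rle_decode(enc) == raw                           # lossless round trip
print("compression ratio:", len(raw) / len(enc))
```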
3

Abbott, Walter D. "A simple, low overhead data compression algorithm for converting lossy processes to lossless." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277905.

Abstract:
Thesis (M.S. in Electrical Engineering), Naval Postgraduate School, December 1993. Thesis advisor(s): Ron J. Pieper. "December 1993." Cover title: A simple, ... lossy compression processes ... Includes bibliographical references. Also available online.
4

Wilhelmy, Jochen [Verfasser], and Willi A. [Akademischer Betreuer] Kalender. "Lossless and Lossy Raw Data Compression in CT Imaging / Jochen Wilhelmy. Betreuer: Willi A. Kalender." Erlangen : Universitätsbibliothek der Universität Erlangen-Nürnberg, 2012. http://d-nb.info/1029374414/34.

5

Liu, Yi. "Codage d'images avec et sans pertes à basse complexité et basé contenu." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0028/document.

Abstract:
This doctoral research project aims at designing an improved version of the still-image codec LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and are used in multimedia applications, but research continues toward higher coding quality and/or lower computational cost. JPEG was standardized twenty years ago, yet it is still the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 remains limited by its higher computational cost compared to JPEG. In 2008, the JPEG committee announced a Call for Advanced Image Coding (AIC), aiming to standardize technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework combines compression efficiency with a content-based representation, and supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), which was detrimental to LAR during the AIC evaluation step. Thus, in this work, the first step is to characterize the impact of the main codec parameters on compression efficiency, and then to construct RDO models that configure the LAR parameters to achieve optimal or near-optimal coding efficiency. Further, based on these RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated by the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality measurement is taken into consideration and the RDO models are applied locally in the image rather than globally. The perceptual quality is improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). Aiming at a low-complexity and efficient image codec, a new coding scheme is also proposed in lossless mode under the LAR framework. In this context, all the coding steps are modified for a better final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves compression ratios equivalent to JPEG 2000 while saving 76% of the encoding and decoding time on average.
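The "quality constraint" approach relies on the standard relation between MSE and PSNR; a minimal sketch of that conversion (the LAR RDO models themselves are not reproduced here):

```python
import math

def psnr_from_mse(mse: float, peak: float = 255.0) -> float:
    """PSNR in dB for a given mean squared error and peak signal value."""
    return 10.0 * math.log10(peak * peak / mse)

def mse_from_psnr(psnr_db: float, peak: float = 255.0) -> float:
    """Invert the relation to obtain the MSE target for a desired PSNR."""
    return peak * peak / (10.0 ** (psnr_db / 10.0))

target_psnr = 40.0                       # desired quality in dB
target_mse = mse_from_psnr(target_psnr)  # MSE setpoint handed to the rate controller
print(target_mse, psnr_from_mse(target_mse))
```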
6

Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.

Abstract:
Signal compression is a necessary part of ECG scanning because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission, it is necessary to minimize the amount of data as much as possible. To minimize the amount of data, lossless or lossy compression algorithms are used. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, and their testing. This master's thesis also includes a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm which uses the accelerometer data is described and realized.
7

Lúdik, Michal. "Porovnání hlasových a audio kodeků." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.

Abstract:
This thesis deals with the description of human hearing, audio and speech codecs, objective quality measures, and a practical comparison of codecs. The chapter about audio codecs covers the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis. The chapter about speech codecs describes linear predictive coding and the G.729 and OPUS codecs. The evaluation of quality covers the segmental signal-to-noise ratio and perceptual evaluation of quality, namely WSS and PESQ. The last chapter describes the practical part of this thesis, that is, a comparison of the memory and time consumption of the audio codecs and a perceptual evaluation of speech codec quality.
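A minimal sketch of the segmental signal-to-noise ratio mentioned above, computed as the average of per-frame SNR values (the frame length of 256 samples is an assumption for illustration):

```python
import math

def segmental_snr(reference, degraded, frame_len=256, eps=1e-12):
    """Average per-frame SNR (dB) between a reference and a degraded signal."""
    assert len(reference) == len(degraded)
    snrs = []
    for start in range(0, len(reference) - frame_len + 1, frame_len):
        ref = reference[start:start + frame_len]
        deg = degraded[start:start + frame_len]
        signal = sum(x * x for x in ref)
        noise = sum((x - y) ** 2 for x, y in zip(ref, deg))
        snrs.append(10.0 * math.log10((signal + eps) / (noise + eps)))
    return sum(snrs) / len(snrs)

# toy example: a sine wave vs. a slightly perturbed copy
ref = [math.sin(0.01 * n) for n in range(4096)]
deg = [x + 0.001 * ((n % 7) - 3) for n, x in enumerate(ref)]
print(round(segmental_snr(ref, deg), 1), "dB")
```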
8

Kasaei, Shohreh. "Fingerprint analysis using wavelet transform with application to compression and feature extraction." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36053/7/36053_Digitised_Thesis.pdf.

Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
9

Zheng, L. "Lossy index compression." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1302556/.

Abstract:
This thesis primarily investigates lossy compression of an inverted index. Two approaches of lossy compression are studied in detail, i.e. (i) term frequency quantization, and (ii) document pruning. In addition, a technique for document pruning, i.e. the entropy-based method, is applied to re-rank retrieved documents as query-independent knowledge. Based on the quantization theory, we examine how the number of quantization levels for coding the term frequencies affects retrieval performance. Three methods are then proposed for the purpose of reducing the quantization distortion, including (i) a non-uniform quantizer; (ii) an iterative technique; and (iii) term-specific quantizers. Experiments based on standard TREC test sets demonstrate that nearly no degradation of retrieval performance can be achieved by allocating only 2 or 3 bits for the quantized term frequency values. This is comparable to lossless coding techniques such as unary, γ and δ-codes. Furthermore, if lossless coding is applied to the quantized term frequency values, then around 26% (or 12%) savings can be achieved over lossless coding alone, with less than 2.5% (or no measurable) degradation in retrieval performance. Prior work on index pruning considered posting pruning and term pruning. In this thesis, an alternative pruning approach, i.e. document pruning, is investigated, in which unimportant documents are removed from the document collection. Four algorithms for scoring document importance are described, two of which are dependent on the score function of the retrieval system, while the other two are independent of the retrieval system. Experimental results suggest that document pruning is comparable to existing pruning approaches, such as posting pruning. Note that document pruning affects the global statistics of the indexed collection. We therefore examine whether retrieval performance is superior based on statistics derived from the full or the pruned collection. Our results indicate that keeping statistics derived from the full collection performs slightly better. Document pruning scores documents and then discards those that fall outside a threshold. An alternative is to re-rank documents based on these scores. The entropy-based score, which is independent of the retrieval system, provides a query-independent knowledge of document specificity, analogous to PageRank. We investigate the utility of document specificity in the context of Intranet search, where hypertext information is sparse or absent. Our results are comparable to the previous algorithm that induced a graph link structure based on the measure of similarity between documents. However, a further analysis indicates that our method is superior on computational complexity.
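A minimal sketch of the basic idea of quantizing term frequencies to a few bits; the thesis additionally studies non-uniform, iterative and term-specific quantizers, which this simple uniform quantizer does not capture:

```python
def quantize_tf(tf: int, tf_max: int, bits: int = 3) -> int:
    """Map a raw term frequency onto one of 2**bits levels (uniform quantizer)."""
    levels = 2 ** bits
    tf = min(tf, tf_max)
    return min(int(tf / tf_max * levels), levels - 1)

def dequantize_tf(level: int, tf_max: int, bits: int = 3) -> float:
    """Reconstruct a representative frequency (midpoint of the quantization cell)."""
    levels = 2 ** bits
    return (level + 0.5) * tf_max / levels

for tf in (1, 3, 10, 40, 100):
    q = quantize_tf(tf, tf_max=100, bits=3)
    print(tf, "->", q, "->", round(dequantize_tf(q, 100, 3), 1))
```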
10

Hansson, Erik, and Stefan Karlsson. "Lossless Message Compression." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-21434.

Abstract:
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial or not. A literature study on lossless compression resulted in a compilation of algorithms and techniques. Using this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected to be integrated into LINX as an extra layer to support lossless message compression. The testing involved sending messages with real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. To calculate the effective throughput for each algorithm, the round-trip time was measured. We concluded that the fastest algorithms, i.e. LZ4, LZO and LZFX, were the most efficient in our tests.
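A minimal sketch of measuring effective throughput for a lossless codec; zlib from the Python standard library stands in for the LZ-family codecs tested in the thesis, and the real measurements were taken over a network using round-trip times:

```python
import time
import zlib

def effective_throughput(payload: bytes, level: int = 1) -> float:
    """Rough compress+decompress throughput in MB/s (zlib as a stand-in codec)."""
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    restored = zlib.decompress(compressed)
    elapsed = time.perf_counter() - start
    assert restored == payload            # lossless round trip
    return len(payload) / elapsed / 1e6

message = b"signal=OK;cell=42;rsrp=-101;" * 4000   # toy telecom-like payload
print(round(effective_throughput(message), 1), "MB/s")
```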
11

Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.

12

Cambiasso, Javier. "Light-matter interactions in lossy and lossless media." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/53284.

Abstract:
Light-matter interactions lie at the core of modern technologies. Different decay channels get activated depending on the structure of matter used and on the properties of the exciting light. In particular, nano-antennas are used as the quintessential object to redistribute the energy of single-photon emitters in the nano-scale or of free-electron oscillations in metals. Two different kinds of nano-antennas are studied: metallic (lossy) and dielectric (lossless). Metallic nano-antennas are shown to be applicable to technologies benefiting from enhancement of both radiative and non-radiative properties. In stark contrast, certain dielectric nano-antennas are essentially lossless in the visible regime, which benefits their coupling to far-field modes. The metallic nano-antennas used in this work are either asymmetric nano-cavities or the ubiquitous bow-tie antenna. With the former we show enhancement of the radiative rate of single-photon emitters located in the neighbourhood of the strongly modified electromagnetic environment. Here, two competing processes collude to either enhance or quench the coupling to the far-field. As will be shown via simulations, these two scenarios are strongly wavelength dependent and two very differentiated regions can be recognised where one overwhelms the other. Contrariwise, the bow-tie antennas are used to enhance the opposite effect: non-radiative channels exclusively. Here it is experimentally demonstrated that surface plasmon polaritons excited in the nano-structure can decay into hot carriers, instead of far-field radiation. A sub-diffraction mapping of the rate of hot electron generation is traced by depositing nano-particles and analysing many scanning electron micrographs. In order to show that nano-photonics also has the potential to get rid of losses, we investigated the possibility of applying gallium phosphide nano-antennas as efficient far-field out-couplers. A comparison between metallic and dielectric nano-antennas is carried out using a unified theory and finally experimental results corroborate the large enhancement predicted by the theory of the fluorescence rate of single-photon emitters located around the scatterers.
13

Penrose, Andrew John. "Extending lossless image compression." Thesis, University of Cambridge, 1999. https://www.repository.cam.ac.uk/handle/1810/272288.

14

Contino, Sergio. "Development of Software Tools for the Test of Ultra Wide Band Receivers." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/4327/.

Abstract:
In recent years, the importance of locating people and objects and communicating with them in real time has become a common occurrence in everyday life. Nowadays, the state of the art of location systems for indoor environments does not have a dominant technology, as instead occurs in location systems for outdoor environments, where GPS is the dominant technology. In fact, each location technology for indoor environments presents a set of features that does not allow its use across all application scenarios, but due to its characteristics it can coexist well with other similar technologies, without being dominant or more widely adopted than the other indoor location systems. In this context, the European project SELECT studies the opportunity of collecting all these different features in an innovative system which can be used in a large number of application scenarios. The goal of this project is to realize a wireless system in which a network of fixed readers is able to query one or more tags attached to objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which deal with the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project has highly innovative content, one of the key stages of the system design is represented by the debug phase. This work aims to study and develop tools and techniques that allow the debug phase of the firmware of the baseband receiving section of the readers to be performed.
15

Bejile, Brian. "Bi-level lossless compression techniques." Diss., Connect to the thesis, 2004. http://hdl.handle.net/10066/1481.

16

Barr, Kenneth C. (Kenneth Charles) 1978. "Energy aware lossless data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87316.

17

Açikel, Ömer Fatih, and William E. Ryan. "Lossless Compression of Telemetry Data." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611434.

Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California. Sandia National Laboratories is faced with the problem of losslessly compressing digitized data produced by various measurement transducers. Their interest is in compressing the data as it is created, i.e., in real time. In this work we examine a number of lossless compression schemes with an eye toward their compression efficiencies and compression speeds. The various algorithms are applied to data files supplied by Sandia containing actual vibration data.
18

Syahrul, Elfitrin. "Lossless and nearly-lossless image compression based on combinatorial transforms." Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00750879.

Abstract:
Common image compression standards are usually based on a frequency transform such as the Discrete Cosine Transform or wavelets. We present a different approach for lossless image compression based on combinatorial transforms. The main transform is the Burrows-Wheeler Transform (BWT), which tends to reorder symbols according to their following context and has become a promising compression approach based on context modelling. BWT was initially applied in text compression software such as BZIP2; nevertheless, it has recently been applied to the image compression field. Compression schemes based on the Burrows-Wheeler Transform are usually lossless; therefore we implement this algorithm in medical imaging in order to reconstruct every bit. Many variants of the three stages which form the original BWT-based compression scheme can be found in the literature. We propose an analysis of the more recent methods and the impact of their association. Then, we present several compression schemes based on this transform which significantly improve on the current standards such as JPEG 2000 and JPEG-LS. In the final part, we present some open problems which are also further research directions.
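A minimal sketch of the forward Burrows-Wheeler Transform using the naive sorted-rotations construction; practical BWT coders use suffix arrays and add move-to-front and entropy-coding stages:

```python
def bwt(data: bytes, sentinel: int = 0) -> bytes:
    """Naive Burrows-Wheeler Transform: sort all rotations, keep the last column.
    A sentinel byte (assumed absent from the input) marks the end of the data."""
    assert sentinel not in data
    s = data + bytes([sentinel])
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

print(bwt(b"banana"))   # groups equal symbols together, which helps later stages
```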
19

Reid, Mark Montgomery. "Path-dictated, lossless volumetric data compression." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338194.

20

Gooch, Mark. "High performance lossless data compression hardware." Thesis, Loughborough University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338008.

21

Nunez, Yanez Jose Luis. "Gbit/second lossless data compression hardware." Thesis, Loughborough University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392516.

22

O'Donnell, Michael. "Lossy compression of speech using perceptual criteria." Thesis, University of Central Lancashire, 1998. http://clok.uclan.ac.uk/20360/.

Abstract:
The research contained in this thesis provides an investigation into a new method of minimising the perceptual differences when encoding digitised speech. An application of the perceptual criteria is described in the context of a codebook encoding methodology. Some of the background studies covered aspects of psychoacoustics, in particular the effects of the human outer, middle and inner ear. Models approximating each region of the ear are utilised and concatenated into a single overall auditory response path model. As the objective of the research is to encode and decode speech waveforms, some study of how speech is produced and of the classification of speech sounds is required. From this there is a description of a basic speech production model which is modelled as a digital filter. A review of the main categories of coding schemes that are currently employed is presented along with commonly used coding methods. In particular the codebook coding method is reviewed in sufficient detail to contrast with the new coding method. The development of a new perceptual minimisation criterion which relies on dual application of the auditory response path model on the original and reconstructed speech waveforms is described. In this, the ordering of codebook searches, the frequency spectrum used as the search target, and windowing functions with their durations and placement are all analysed to determine the optimum encoder design. Also described are a number of prospective gain algorithms which cover both time and frequency domain implementations. A new encoder is constructed which fully integrates the new perceptual criterion into the minimisation of the difference between the original and reconstructed speech waveforms. In the minimisation no part of the traditional encoder method is used; however, both methods use a similar technique for determining gain factors. Speech derived from both encoders was subjectively assessed by a number of untrained, independent listeners. The results presented show that both methods are comparable but there is a slight preference towards the traditional encoder. A measure of the complexity indicated that the new minimisation method is also more complex than the traditional encoder.
23

Shahid, Hiba. "Lossy color image compression based on quantization." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54781.

Abstract:
Significant improvements that are based on quantization can be made in the area of lossy image compression. In this thesis, we propose two effective quantization-based algorithms for compressing two-dimensional RGB images. The first method is based on a new codebook generation algorithm which, unlike existing VQ methods, forms the codewords directly in one step, using the less common local information in every image in the training set. Existing methods form an initial codebook and update it iteratively, by taking the average of vectors representing small image regions. Another distinguishing aspect of this method is the use of non-overlapping hexagonal blocks instead of the traditional rectangular blocks to partition the images. We demonstrate how the codewords are extracted from such blocks. We also measure the separate contribution of each of the two distinguishing aspects of the proposed codebook generation algorithm. The second proposed method, unlike all known VQ algorithms, does not use training images or a codebook. It is implemented to further improve the image quality resulting from the proposed codebook generation algorithm. In this algorithm, the representative pixels (i.e. those that represent pixels that are perceptually similar) are extracted from sets of perceptually similar color pixels, and the image pixels are reconstructed using the closest representative pixels. There are two main differences between this algorithm and the existing lossy compression algorithms, including our proposed codebook generation algorithm. The first is that this algorithm exploits image pixel correlation according to the viewer's perception and not according to how numerically close a pixel is to its neighboring pixels. The second difference is that this algorithm does not reconstruct an entire image block simultaneously using a codeword; each pixel is reconstructed such that it has the value of its closest representative pixel.
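For contrast with the methods proposed in this thesis, here is a minimal sketch of the conventional VQ step they improve upon: mapping each (here rectangular, flattened) block to its nearest codeword by Euclidean distance.

```python
def nearest_codeword(block, codebook):
    """Return the index of the codeword closest to the block (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(block, codebook[i]))

# toy 2x2 grayscale blocks flattened to 4-vectors, and a tiny hand-made codebook
codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
blocks = [(10, 12, 9, 11), (200, 210, 190, 205), (120, 130, 125, 135)]

indices = [nearest_codeword(b, codebook) for b in blocks]       # encode
reconstructed = [codebook[i] for i in indices]                  # decode
print(indices, reconstructed)
```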
24

Lopez-Hernandez, Roberto. "Lossless compression for computer generated animation sequences." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275237.

25

Atek, S. "Lossless compression for on-board satellite imaging." Thesis, University of Surrey, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403130.

26

Neves, António José Ribeiro. "Lossless compression of images with specific characteristics." Doctoral thesis, Universidade de Aveiro, 2007. http://hdl.handle.net/10773/2210.

Abstract:
The compression of some types of images is a challenge for some standard compression techniques. This thesis investigates the lossless compression of images with specific characteristics, namely simple images, color-indexed images and microarray images. We are interested in the development of complete compression methods and in the study of preprocessing algorithms that could be used together with standard compression methods. Histogram sparseness, a property of simple images, is addressed in this thesis. We developed a preprocessing technique, denoted histogram packing, that explores this property and can be used with standard compression methods to improve their efficiency significantly. Histogram packing and palette reordering algorithms can be used as a preprocessing step for improving the lossless compression of color-indexed images. This thesis presents several algorithms and a comprehensive study of the already existing methods. Specific compression methods, such as binary tree decomposition, are also addressed. The use of microarray expression data in state-of-the-art biology is well established and, due to the significant volume of data generated per experiment, efficient lossless compression methods are needed. In this thesis, we explore the use of standard image coding techniques and we present new algorithms to efficiently compress this type of image, based on finite-context modeling and arithmetic coding.
27

Krejčí, Michal. "Komprese dat." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217934.

Abstract:
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part of the thesis is a theoretical elaboration which informs the reader about the basic terminology, the reasons for data compression, the usage of data compression in standard practice and the classification of compression algorithms. The practical part of the thesis deals with the realization of the compression algorithms in Matlab and LabWindows/CVI.
28

Chin, Bernard Yiow-Min. "Two algorithms for Lossy compression of 3D images." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38746.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 107-108).
29

Worek, Brian David. "Enabling Approximate Storage through Lossy Media Data Compression." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/87561.

Abstract:
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and need to frequently access memory. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet growing demand for RMS application deployment in battery-constrained devices, such as mobile and Internet-of-Things, designers will need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience, a property that allows them to produce acceptable outputs even when data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution that uses lossy compression to reduce the storage cost of media data.
30

Bhupathiraju, Kalyan Varma. "Empirical analysis of BWT-based lossless image compression." Morgantown, W. Va. : [West Virginia University Libraries], 2010. http://hdl.handle.net/10450/10958.

Abstract:
Thesis (M.S.)--West Virginia University, 2010. Title from document title page. Document formatted into pages; contains v, 61 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 54-56).
31

Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.

Abstract:
This thesis is focused on the subject of image compression using the wavelet transform. The first part of this document provides the reader with information about image compression, presents well-known contemporary algorithms and looks into the details of wavelet compression and the subsequent encoding schemes. Both the JPEG and JPEG 2000 standards are introduced. The second part of this document analyzes and describes the implementation of an image compression tool, including innovations and optimizations. The third part is dedicated to comparison and evaluation of the achieved results.
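A minimal sketch of a single-level 2D Haar decomposition, the simplest member of the wavelet family discussed above; JPEG 2000 itself uses longer biorthogonal filters and multiple decomposition levels:

```python
def haar_2d_level(image):
    """One level of a 2D Haar transform: returns (LL, LH, HL, HH) subbands.
    The image is a list of rows with even width and height."""
    def haar_rows(rows):
        lo, hi = [], []
        for row in rows:
            lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
        return lo, hi

    def transpose(m):
        return [list(col) for col in zip(*m)]

    lo, hi = haar_rows(image)                       # filter along rows
    ll, lh = haar_rows(transpose(lo))               # then along columns
    hl, hh = haar_rows(transpose(hi))
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)

img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [90, 90, 40, 40],
       [90, 90, 40, 40]]
ll, lh, hl, hh = haar_2d_level(img)
print(ll)   # half-resolution approximation; lh/hl/hh hold the detail coefficients
```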
32

Nicholl, Peter Nigel. "Feature directed spiral image compression : (a new technique for lossless image compression)." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339326.

33

Boström, Kim. "Lossless quantum data compression and secure direct communication." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=971991723.

34

Boström, Kim. "Lossless quantum data compression and secure direct communication." Phd thesis, Universität Potsdam, 2004. http://opus.kobv.de/ubp/volltexte/2005/100/.

Abstract:
This thesis deals with the encoding and transmission of information through a quantum channel. A quantum channel is a quantum mechanical system whose state is manipulated by a sender and read out by a receiver. The individual state of the channel represents the message. The two topics of the thesis comprise 1) the possibility of compressing a message stored in a quantum channel without loss of information and 2) the possibility to communicate a message directly from one party to another in a secure manner, that is, such that a third party is not able to eavesdrop on the message without being detected. The main results of the thesis are the following. A general framework for variable-length quantum codes is worked out. These codes are necessary to make lossless compression possible. Due to the quantum nature of the channel, the encoded messages are in general in a superposition of different lengths. It is found to be impossible to compress a quantum message without loss of information if the message is not a priori known to the sender. In the other case it is shown that lossless quantum data compression is possible and a lower bound on the compression rate is derived. Furthermore, an explicit compression scheme is constructed that works for arbitrarily given source message ensembles. A quantum cryptographic protocol - the "ping-pong protocol" - is presented that realizes the secure direct communication of classical messages through a quantum channel. The security of the protocol against arbitrary eavesdropping attacks is proven for the case of an ideal quantum channel. In contrast to other quantum cryptographic protocols, the ping-pong protocol is deterministic and can thus be used to transmit a random key as well as a composed message. The protocol is perfectly secure for the transmission of a key, and it is quasi-secure for the direct transmission of a message. The latter means that the probability of successful eavesdropping decreases exponentially with the length of the message.
35

Blandon, Julio Cesar. "A novel lossless compression technique for text data." FIU Digital Commons, 1999. http://digitalcommons.fiu.edu/etd/1694.

Abstract:
The focus of this thesis is placed on text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique with those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these text files. It was deemed necessary that the compression algorithm to be developed would have to be effective even for small files and be able to contend with uncommon words as they are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility to existing compression techniques. In other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and such outcomes, and the research objective of achieving higher compression ratio is attained.
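A minimal sketch of dictionary-based text compression in the spirit described above: a classic LZW encoder that adds unseen strings to the dictionary as they are encountered (illustrative only; the thesis proposes its own ASCII-based scheme):

```python
def lzw_encode(text: str):
    """Classic LZW: start from single characters, grow the dictionary as new
    strings are encountered, and emit one code per longest known prefix."""
    dictionary = {chr(c): c for c in range(256)}
    next_code = 256
    current = ""
    codes = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for", 24, "characters")   # fewer codes than characters
```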
36

Stratford, Barney. "A formal treatment of lossless data compression algorithms." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427842.

37

Pinho, Marcelo da Silva. "Universal Lossless Data Compression with Finite State Encoders." Pontifícia Universidade Católica do Rio de Janeiro, 1996. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8846@1.

Abstract:
In this work the problem of data compression by finite-state and information-lossless encoders is studied. The problem is divided into three parts: compression of individual sequences, compression of pairs of sequences, and compression of images. The main motivation of the work is the study of the compression of pairs of sequences as an intermediate step toward understanding the problem of compressing two-dimensional data. For each of these cases, a lower bound is defined which sets a limit on the smallest compression rate that can be achieved by any finite-state and information-lossless encoder. Universal encoders are proposed and their performance compared to the optimal attainable. The proposed encoders were implemented in software and used to compress finite sequences, pairs of finite sequences and finite images. The simulation results are analysed.
38

Feng, Hsin-Chang. "Validation for Visually lossless Compression of Stereo Images." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579705.

Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV. This paper describes the details of subjective validation for visually lossless compression of stereoscopic 3-dimensional (3D) images. The subjective testing method employed in this work is adapted from methods used previously for visually lossless compression of 2-dimensional (2D) images. Confidence intervals on the correct response rate obtained from the subjective validation of compressed stereo pairs provide reliable evidence to indicate that the compressed stereo pairs are visually lossless.
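The correct response rate in such a validation is a binomial proportion; a minimal sketch of a normal-approximation confidence interval for it (the paper's exact statistical procedure may differ):

```python
import math

def proportion_ci(correct: int, trials: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a correct-response rate."""
    p = correct / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# e.g. 510 correct answers out of 1000 forced-choice trials
low, high = proportion_ci(510, 1000)
print(f"observed rate 0.51, 95% CI ({low:.3f}, {high:.3f})")
```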
39

Amrani, Naoufal, Joan Serra-Sagrista, Miguel Hernandez-Cabronero, and Michael Marcellin. "Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data." IEEE, 2016. http://hdl.handle.net/10150/623190.

Abstract:
Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among spectral wavelet-transformed components. The scheme is based on a pyramidal prediction, using different regression models, to increase the statistical independence in the wavelet domain. For lossless coding, RWA has proven to be superior to other spectral transforms like PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains a rate-distortion performance superior to those obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
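A minimal sketch of the underlying idea of predicting one spectral component from others by least-squares regression and keeping only the low-variance residual; the actual RWA pyramid and its regression models are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 1000 "pixels", 3 reference spectral components plus one target
# component that is roughly a linear combination of them
refs = rng.normal(size=(1000, 3))
target = refs @ np.array([0.6, -0.2, 1.1]) + 0.05 * rng.normal(size=1000)

# fit the regression (with an intercept column) and form the residual
design = np.column_stack([refs, np.ones(len(refs))])
coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
residual = target - design @ coeffs

# the residual has much lower variance than the original component,
# so it is cheaper to encode losslessly
print(target.var().round(3), residual.var().round(4))
```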
40

Xu, Zhongwei. "Diagnostically lossless compression strategies for x-ray angiography images." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/308327.

Abstract:
The past several decades have witnessed a major evolution in medical imaging techniques, making medical images commonplace in healthcare systems and an integral part of a patient's medical record. Among the existing medical imaging modalities, X-ray imaging is one of the most popular technologies due to its low cost, high resolution and excellent capability to penetrate deep within tissue. In particular, X-ray angiography --which uses minimally invasive catheterization and X-ray imaging-- is widely used to identify irregularities in the vascular system. X-ray angiography images can be classified into two types: general X-ray angiography (GXA) images, which present blood vessels in several body parts such as arms, legs and feet; and coronary angiogram video sequences (CAVSs), which focus only on coronary vessel trees for diagnosing cardiovascular diseases. Because of these different functions, the two types of images have different features: GXA images normally have high spatial resolution (width and height) but low temporal resolution (number of frames), while CAVSs usually have lower spatial resolution but higher temporal resolution. Due to the increasing number of medical studies using X-ray angiography images and the need to store and share them, compression of these images is becoming critical. Lossy compression has the advantage of high data-reduction capability, but it is rarely accepted by medical communities because the modification of data may affect the diagnosis process. Lossless compression guarantees perfect reconstruction of the medical signal, but results in low compression ratios. Diagnostically lossless compression is becoming the preferred choice, as it provides an optimal trade-off between compression performance and diagnostic accuracy: the clinically relevant data is encoded without any loss, while the irrelevant data is encoded with loss. In this scenario, identifying and distinguishing the clinically relevant from the clinically irrelevant data is the first and usually most important stage of diagnostically lossless compression methods. In this thesis, two diagnostically lossless compression strategies are developed, one for GXA images and one for CAVSs. For GXA images, the clinically relevant focal area in each frame is first identified, and a background-suppression approach is then employed to increase the data redundancy of the images and hence improve compression performance. For CAVSs, a frame-identification procedure is implemented to recognise the diagnostically unimportant frames that do not contain visible vessel structures; lossy compression is then applied to these frames, and lossless compression to the remaining frames. Several compression techniques have been investigated for both types of images, including the DICOM-compliant standards JPEG2000, JPEG-LS and H.264/AVC, and the latest advanced video compression standard HEVC. For JPEG2000, multicomponent transforms and progressive lossy-to-lossless coding are also tested. Experimental results suggest that both the focal-area-identification and frame-identification processes are fully automatic and accurately identify the clinically relevant data. Regarding compression performance, for GXA images, compared to coding without background suppression, the diagnostically lossless compression method achieves average bit-stream reductions of as much as 34% and improvements in reconstruction quality of up to 20 dB SNR for progressive decoding; for CAVSs, the frame-identification followed by selective lossy/lossless compression strategy achieves bit-stream reductions of more than 19% on average compared to lossless compression.
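
To illustrate the selective-coding idea described in this abstract, the following Python sketch encodes diagnostically relevant frames losslessly and the remaining frames after a coarse quantization. It is only an illustration under stated assumptions: the helper `contains_visible_vessels` is a hypothetical stand-in for the thesis's frame-identification step, and `zlib` stands in for the DICOM-compliant codecs actually evaluated.

```python
import zlib

def contains_visible_vessels(frame_bytes: bytes) -> bool:
    """Hypothetical detector standing in for the frame-identification step.
    A real detector would analyse vessel structures; this placeholder only
    checks the mean intensity so the sketch is runnable."""
    return sum(frame_bytes) / max(len(frame_bytes), 1) > 100

def encode_sequence(frames):
    """Encode a list of raw frame byte strings: lossless coding for frames
    with visible vessels, coarser (lossy-style) coding otherwise."""
    bitstream = []
    for frame in frames:
        if contains_visible_vessels(frame):
            # Diagnostically relevant: lossless (zlib stands in for JPEG-LS/JPEG2000).
            bitstream.append((b"L", zlib.compress(frame, level=9)))
        else:
            # Diagnostically irrelevant: drop the low bits before coding,
            # a simple stand-in for a lossy codec.
            quantized = bytes(b & 0xF0 for b in frame)
            bitstream.append((b"Q", zlib.compress(quantized, level=9)))
    return bitstream
```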
APA, Harvard, Vancouver, ISO, and other styles
41

Bartrina-Rapesta, Joan, Victor Sanchez, Joan Serra-Sagristà, Michael W. Marcellin, Francesc Aulí-Llinàs, and Ian Blanes. "Lossless medical image compression through lightweight binary arithmetic coding." SPIE-INT SOC OPTICAL ENGINEERING, 2017. http://hdl.handle.net/10150/626487.

Full text
Abstract:
A contextual lightweight arithmetic coder is proposed for lossless compression of medical imagery. Contexts are defined using causal data from previously coded symbols, an inexpensive yet efficient approach. To further reduce the computational cost, a binary arithmetic coder with fixed-length codewords is adopted, avoiding the normalization procedure common in most implementations, and the probability of each context is estimated through bitwise operations. Experimental results are provided for several medical images and compared against state-of-the-art coding techniques, yielding average improvements of nearly 0.1 to 0.2 bps.
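
To make the division-free, bitwise probability estimation concrete, here is a minimal Python sketch of a per-context estimator updated only with shifts and additions, plus a toy causal context rule. The precision, adaptation rate, and context definition are illustrative assumptions, not the coder's actual parameters.

```python
PRECISION = 12      # probabilities stored as 12-bit integers
ADAPT_SHIFT = 5     # adaptation rate: higher value = slower adaptation

class ContextModel:
    """Per-context estimate of p(bit = 1), maintained with shifts and adds only."""
    def __init__(self, n_contexts: int):
        # Every context starts at p(bit = 1) = 0.5.
        self.p1 = [1 << (PRECISION - 1)] * n_contexts

    def probability_of_one(self, ctx: int) -> int:
        return self.p1[ctx]

    def update(self, ctx: int, bit: int) -> None:
        # Exponential forgetting with a single shift: p1 moves a fraction
        # 2**-ADAPT_SHIFT of the way towards 0 or 2**PRECISION.
        if bit:
            self.p1[ctx] += ((1 << PRECISION) - self.p1[ctx]) >> ADAPT_SHIFT
        else:
            self.p1[ctx] -= self.p1[ctx] >> ADAPT_SHIFT

def context_from_causal_neighbors(west: int, north: int) -> int:
    """Toy causal context: quantize two previously coded 8-bit neighbors
    into a 4-bit context index (16 contexts)."""
    return ((west >> 6) << 2) | (north >> 6)
```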
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Xiaolin. "Algorithms and Architectures for Lossless Image and Video Compression." Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Höglund, Simon. "Lightweight Real-Time Lossless Software Compression of Trace Data." Thesis, Linköpings universitet, Datorteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179717.

Full text
Abstract:
Powerful System-on-Chip (SoC) devices produced today have increasing complexity, featuring more processors and integrated specialized hardware. This is the case with the Ericsson Many-Core Architecture (EMCA), which runs the complex radio modulation standards within 3G, 4G and 5G. Such complicated systems require trace data to debug and verify their behavior, and massive amounts of hardware and software traces can be produced in a short time. Data compression reduces the amount of memory space required by reducing redundancy in the information. Compression of trace data leads to increased throughput out of the SoC and less space required to store the data. However, it does not come for free, since the algorithms used for compression are computationally demanding. This results in trade-offs between compression factor, consumed clock cycles and occupied memory space. This master thesis investigates the possibility of compressing the trace data produced in real time by the EMCA with a software implementation. The EMCA real-time trace architecture and its memory layers limit the possible software solutions. Based on a thorough investigation of suitable compression algorithms and MATLAB experiments, the LZSS algorithm was chosen for the EMCA. Three different variants of the LZSS algorithm were implemented, resulting in a trade-off curve between compression factor and clock cycles. Software and hardware trace data were compressed with average compression factors ranging from 1.7 to 2.4, which is good for a lightweight SoC solution. However, pure software compression was quite slow, as the algorithm consumed 34 to 371 clock cycles per encoded byte to achieve the respective compression factors. The results showed strongly diminishing returns in compression factor when investing more clock cycles.
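
As a reference for the algorithm named in this abstract, the following Python sketch shows a minimal greedy LZSS encoder and decoder. The window and match-length constants are illustrative, and this readable version makes no attempt at the cycle-level optimizations the thesis targets.

```python
WINDOW = 4096   # sliding-window size (illustrative, not the thesis's tuning)
MIN_MATCH = 3
MAX_MATCH = 18

def lzss_encode(data: bytes):
    """Greedy LZSS: emit ('M', offset, length) for window matches of at least
    MIN_MATCH bytes, otherwise ('L', literal byte)."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Search the window for the longest match starting at position i.
        for j in range(max(0, i - WINDOW), i):
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            out.append(("M", best_off, best_len))
            i += best_len
        else:
            out.append(("L", data[i]))
            i += 1
    return out

def lzss_decode(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == "L":
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):   # byte-by-byte copy allows overlapping matches
                out.append(out[-off])
    return bytes(out)

# Round-trip check on a small repetitive buffer:
# assert lzss_decode(lzss_encode(b"abcabcabcabcxyz")) == b"abcabcabcabcxyz"
```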
APA, Harvard, Vancouver, ISO, and other styles
44

Milward, Mark John. "Investigations into hardware-based parallel lossless data compression systems." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/33642.

Full text
Abstract:
Current increases in silicon logic density have made it feasible to implement multiprocessor systems on a single chip that can meet the intensive data-processing demands of highly concurrent systems. This thesis describes research into a hardware implementation of a high-performance parallel multi-compressor chip. In order to fully explore the design space, several models are created at various levels of abstraction to capture the full characteristics of the architecture. A detailed investigation into the performance of alternative input and output routing strategies for realistic data sets demonstrates that the design of parallel compression devices involves important trade-offs affecting compression performance, latency, and throughput. The most promising approach is written in a hardware description language and synthesised for FPGA hardware as proof of concept. It is shown that a multi-compressor architecture can be a scalable solution, able to operate at throughputs that cope with the demands of modern high-bandwidth applications whilst retaining good compression performance.
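
As a rough software analogy for the block-based routing trade-offs discussed above (not the thesis's hardware architecture), the following Python sketch deals fixed-size blocks round-robin to a pool of worker processes, each acting as one compressor core. Block size, core count, and the use of `zlib` are assumptions.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_block(block: bytes) -> bytes:
    # Each worker process plays the role of one hardware compressor core.
    return zlib.compress(block, level=6)

def parallel_compress(data: bytes, n_cores: int = 4, block_size: int = 64 * 1024):
    """Split the input into fixed-size blocks, compress them on n_cores workers,
    and collect the outputs in order. Splitting into independent blocks raises
    throughput but can cost compression ratio, since matches cannot cross
    block boundaries."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ProcessPoolExecutor(max_workers=n_cores) as pool:
        return list(pool.map(compress_block, blocks))

# On platforms that spawn worker processes, call parallel_compress from under
# an `if __name__ == "__main__":` guard.
```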
APA, Harvard, Vancouver, ISO, and other styles
45

Aggarwal, Viveka. "Lossless Data Compression for Security Purposes Using Huffman Encoding." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1456848208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hsiao, Ching-Wen, and 蕭景文. "Lossless and Lossy Compression Techniques for Corners and Contours." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/10082300319026280647.

Full text
Abstract:
Master's thesis. National Taiwan University, Graduate Institute of Communication Engineering, academic year 103. Shape is an important feature for object recognition, template matching, and image analysis. To represent it efficiently, techniques for encoding corners and contours at very low bit rates are necessary. Therefore, in this thesis, three data compression techniques are proposed to reduce the data size required for corners and contours. For corner compression, we rearrange the corners based on distance information and use an active matrix to encode the corner positions. Adaptive arithmetic coding and context modeling are also adopted to improve efficiency. In addition, techniques to compress the other information in binary feature points are proposed. To encode the angles, the correlation between the distance information of feature points is considered. For descriptors, an asymmetric reference-point selection scheme is proposed to improve the predictive coding. For lossless contour compression, the proposed algorithm first applies a morphology operation to shrink the contour slightly, and then uses a concept similar to the angle Freeman chain code; however, the chain code is split into a main-chain code and a sub-chain code. From observation, the angles between consecutive directions are mostly 0 degrees or 45 degrees to the right or left. To decrease symbol diversity, angles other than 0, 45 and -45 degrees are represented by the same symbol in the main-chain code and are distinguished in the other chain code. Moreover, some symbol substitution is applied in the main-chain code to simplify the most common symbol combinations that arise from digital contours. After that, a Huffman code is applied to the main-chain code as an intermediate code according to the probability statistics of the symbols, and, to improve compression efficiency, a distribution transform is applied to alter the distribution of zeros and ones: runs of identical bits become denser and the probability of zeros gets higher. Finally, with adaptive arithmetic coding and context modeling, the data size of the contour is reduced considerably. The central concept of the proposed lossy contour compression is to approximate the original shape by a combination of vertices and polynomial curves. The vertices, also called dominant points, are found by the following steps. First, initial dominant points are chosen according to a curvature measure, and then third-order polynomials with optimization are used to approximate the original contour. After calculating the error between the approximated and original contours, new dominant points are iteratively added at suitable positions until the error is within tolerance. By tuning within the eight-connected neighborhood of each dominant point, the dominant points and the polynomial coefficients are replaced if a better position is found. When encoding the coordinates of dominant points, a concept similar to the proposed corner-compression techniques is adopted. We found that there is some correlation between polynomial coefficients, so the data size can be significantly decreased by improved adaptive arithmetic coding.
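
To make the chain-code front end of the contour coder concrete, the following Python sketch computes an 8-connected Freeman chain code and its differential (angle) code. It assumes the contour is given as a closed list of 8-connected (x, y) points and is only a sketch of the general technique, not the full proposed coder with its main-chain/sub-chain split.

```python
# 8-connected Freeman directions: index k corresponds to an angle of k * 45 degrees
# (measured counter-clockwise with y increasing upwards; adjust for image coordinates).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(contour):
    """Chain code of a closed contour given as a list of 8-connected (x, y) points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

def angle_chain_code(codes):
    """Differential (angle) chain code: each symbol is the turn between consecutive
    directions, in units of 45 degrees, mapped to the range [-4, 3]. On smooth
    digital contours most symbols are 0 or +/-1, which is what makes splitting
    into a main-chain and a sub-chain code worthwhile."""
    return [((b - a + 4) % 8) - 4 for a, b in zip(codes, codes[1:])]
```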
APA, Harvard, Vancouver, ISO, and other styles
47

楊朝勝. "Fixed distortion subband image coding based on integrated lossless/lossy compression." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/43251922277044344222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rawat, Chandan Singh D. "Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes." Thesis, 2015. http://ethesis.nitrkl.ac.in/6697/1/CSDR_PHD_2015.pdf.

Full text
Abstract:
Digital imaging generates a large amount of data which needs to be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Though both storage and transmission-medium capacities have been continuously increasing over the last two decades, they do not match present requirements. Many lossless and lossy image compression schemes exist for compressing images in the spatial domain and the transform domain. Employing more than one traditional image compression algorithm results in hybrid image compression techniques. Based on the existing schemes, novel hybrid image compression schemes are developed in this doctoral research work to compress images effectively while maintaining quality.
APA, Harvard, Vancouver, ISO, and other styles
49

Pai, Shang-Chin, and 白上勤. "Lossless and Lossy Image Compression Methods Based on the Statistics of Data Patterns." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/03454920227449228717.

Full text
Abstract:
Master's thesis. Chaoyang University of Technology, Department of Information Management, academic year 92. This thesis proposes two grey-level image compression methods. One is a lossless image compression technique called the multiple models for the probabilities of patterns (MMPP) method; the other is a lossy image compression technique named the piecewise based VQ codebook generating (GBVQCG) method. The MMPP method employs a median edge detector (MED) to reduce the entropy rate of a grey-level image; it then decreases the color value of each pixel by using a base switching transformation (BST), based on the similarity of its adjacent pixels. To improve memory-space efficiency, the MMPP method classifies the data generated in the MED and BST stages into groups according to the properties of their data patterns. Finally, it applies arithmetic encoding to further compress the data in each group. The GBVQCG method partitions an image into non-overlapping image blocks, classifies the blocks into groups, and then specifies the number of codewords to be generated from the image blocks in each group; this number is decided according to the standard deviation and the number of image blocks in the group. The experimental results show that the MMPP method mostly provides higher storage-space efficiency than the lossless JPEG 2000 method, and the GBVQCG method generally outperforms the LBG algorithm in running time and decompressed-image quality.
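
For reference, the median edge detector (MED) predictor mentioned in this abstract is the same predictor used in LOCO-I/JPEG-LS; the following Python sketch computes it and the resulting prediction residuals. The zero-padded border handling is an assumption, not necessarily the thesis's choice.

```python
def med_predict(a: int, b: int, c: int) -> int:
    """Median edge detector (MED) prediction, as in LOCO-I/JPEG-LS:
    a = left neighbor, b = upper neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def prediction_residuals(image):
    """Residuals image[y][x] - MED prediction for an image given as a list of
    rows of ints. Border pixels use zero-valued neighbors for simplicity."""
    h, w = len(image), len(image[0])
    res = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = image[y][x - 1] if x > 0 else 0
            b = image[y - 1][x] if y > 0 else 0
            c = image[y - 1][x - 1] if x > 0 and y > 0 else 0
            res[y][x] = image[y][x] - med_predict(a, b, c)
    return res
```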
APA, Harvard, Vancouver, ISO, and other styles
50

Han, Dan. "A new progressive lossy-to-lossless coding method for 2.5-D triangle meshes with arbitrary connectivity." Thesis, 2016. http://hdl.handle.net/1828/7614.

Full text
Abstract:
A new progressive lossy-to-lossless coding framework for 2.5-dimensional (2.5-D) triangle meshes with arbitrary connectivity is proposed by combining ideas from the previously proposed average-difference image-tree (ADIT) method and the Peng-Kuo (PK) method, with several modifications. The proposed method represents the 2.5-D triangle mesh with a binary tree data structure and codes the tree by a top-down traversal. The proposed framework contains several parameters, and many variations are tried in order to find a good choice for each parameter, considering both lossless and progressive coding performance. Based on extensive experimentation, we recommend a particular set of best choices for these parameters, leading to the mesh-coding method proposed herein.
APA, Harvard, Vancouver, ISO, and other styles
