Academic literature on the topic 'Lossless and Lossy compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Lossless and Lossy compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Lossless and Lossy compression"

1. Yu, Rongshan, and Wenxian Yang. "ScaleQC: a scalable lossy to lossless solution for NGS data compression." Bioinformatics 36, no. 17 (2020): 4551–59. http://dx.doi.org/10.1093/bioinformatics/btaa543.

Abstract:
Motivation: Per-base quality values in Next Generation Sequencing data take a significant portion of storage even after compression. Lossy compression technologies could further reduce the space used by quality values. However, in many applications, lossless compression is still desired. Hence, sequencing data in multiple file formats have to be prepared for different applications. Results: We developed a scalable lossy-to-lossless compression solution for quality values named ScaleQC (Scalable Quality value Compression). ScaleQC provides bit-stream-level scalability: the losslessly compressed bit-stream produced by ScaleQC can be truncated to lower data rates without incurring an expensive transcoding operation. Despite its scalability, ScaleQC still achieves compression performance at both lossless and lossy data rates comparable to that of existing lossless or lossy compressors. Availability and implementation: ScaleQC has been integrated with SAMtools as a special quality value encoding mode for CRAM. Its source code can be obtained from our integrated SAMtools (https://github.com/xmuyulab/samtools), which depends on the integrated HTSlib (https://github.com/xmuyulab/htslib). Supplementary information: Supplementary data are available at Bioinformatics online.

2. Eldstål-Ahrens, Albin, Angelos Arelakis, and Ioannis Sourdis. "L2C: Combining Lossy and Lossless Compression on Memory and I/O." ACM Transactions on Embedded Computing Systems 21, no. 1 (2022): 1–27. http://dx.doi.org/10.1145/3481641.

Abstract:
In this article, we introduce L2C, a hybrid lossy/lossless compression scheme applicable both to the memory subsystem and the I/O traffic of a processor chip. L2C employs general-purpose lossless compression and combines it with state-of-the-art lossy compression to achieve compression ratios up to 16:1 and to improve the utilization of the chip's bandwidth resources. Compressing memory traffic yields lower memory access time, improving system performance and energy efficiency. Compressing I/O traffic offers several benefits for resource-constrained systems, including more efficient storage and networking. We evaluate L2C as a memory compressor in simulation with a set of approximation-tolerant applications. L2C improves baseline execution time by an average of 50% and total system energy consumption by 16%. Compared to the lossy and lossless current state-of-the-art memory compression approaches, L2C improves execution time by 9% and 26%, respectively, and reduces system energy costs by 3% and 5%, respectively. I/O compression efficacy is evaluated using a set of real-life datasets. L2C achieves compression ratios of up to 10.4:1 for a single dataset and on average about 4:1, while introducing no more than 0.4% error.

3. Magar, Satyawati, and Bhavani Sridharan. "Comparative analysis of various image compression techniques for quasi fractal lossless compression." International Journal of Computer Communication and Informatics 2, no. 2 (2020): 30–45. http://dx.doi.org/10.34256/ijcci2024.

Abstract:
The most important parameters to consider in image compression methods are peak signal-to-noise ratio (PSNR) and compression ratio (CR). These two parameters are used to judge the quality of any image, and they play a vital role in image processing applications. The biomedical domain is one of the critical areas where large image datasets are involved in analysis, so biomedical image compression is essential. Compression techniques are classified into lossless and lossy. As the name indicates, in a lossless technique the image is compressed without any loss of data, whereas in a lossy technique some information may be lost. Here both lossy and lossless techniques for image compression are used. This research discusses different compression approaches from these two categories and highlights brain images for the compression techniques. Both lossy and lossless techniques are implemented, and their advantages and disadvantages are studied. Two important quality parameters, CR and PSNR, are calculated. The existing techniques DCT, DFT, DWT and fractal coding are implemented, and new techniques are introduced: the oscillation concept method, BTC-SPIHT, and a hybrid technique using an adaptive threshold and a quasi fractal algorithm.
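Both quality parameters referenced above are easy to make concrete. The following is a minimal Python sketch (illustrative, not the paper's code), assuming 8-bit images held as NumPy arrays:

```python
import numpy as np

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    # CR = original size / compressed size; higher means stronger compression.
    return original_bytes / compressed_bytes

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    # PSNR = 10 * log10(peak^2 / MSE) in dB; infinite for a lossless reconstruction.
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```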

4. Srividya, P. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (2023): 36–42. http://dx.doi.org/10.46610/joits.2023.v09i01.005.

Abstract:
The process of reducing the number of bits required to represent data is referred to as compression. The advantages of compression include a reduction in the time taken to transfer data from one point to another, and a reduction in the cost of storage space and network bandwidth. There are two types of compression algorithms, namely lossy and lossless. Lossy algorithms find utility in compressing audio and video signals, whereas lossless algorithms are used in compressing text messages. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio and video files. These multimedia files demand more storage space than traditional files, which has given rise to the requirement for an efficient compression algorithm. There has been considerable improvement in the computing performance of machines due to the advent of the multi-core processor; however, this multi-core architecture is not exploited by conventional compression algorithms. This paper shows the implementation of the lossless compression algorithms Lempel-Ziv-Markov, BZip2 and ZLIB using the concept of multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison is done for both compression without multithreading and compression with multithreading.
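The chunk-parallel idea behind the paper can be sketched with codecs from the Python standard library (zlib, bz2, lzma); this is an illustration of the technique under those assumptions, not the paper's implementation, and compressing independent chunks typically costs a little compression ratio:

```python
import bz2
import lzma
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunked(data: bytes, codec, chunk_size: int = 1 << 20, workers: int = 4) -> list[bytes]:
    # Split the input into fixed-size chunks and compress them concurrently;
    # these codecs release the GIL, so threads give real speedups on large inputs.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(codec, chunks))

if __name__ == "__main__":
    data = b"sample text for a compression benchmark " * 200_000
    for name, codec in {"zlib": zlib.compress, "bzip2": bz2.compress, "lzma": lzma.compress}.items():
        start = time.perf_counter()
        out = compress_chunked(data, codec)
        elapsed = time.perf_counter() - start
        print(f"{name}: ratio {len(data) / sum(map(len, out)):.2f}:1 in {elapsed:.3f} s")
```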

5. Hayati, Anis Kamilah, and Haris Suka Dyatmika. "The Effect of JPEG2000 Compression on Remote Sensing Data of Different Spatial Resolutions." International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (2018): 111. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2724.

Abstract:
The huge size of remote sensing data strains the information technology infrastructure needed to store, manage, deliver and process the data. Compression is a possible solution to offset these disadvantages. JPEG2000 provides both lossless and lossy compression, with scalability for lossy compression. As the lossy compression ratio gets higher, the file size is reduced but the information loss increases. This paper investigates the effect of JPEG2000 compression on remote sensing data of different spatial resolutions. Three sets of data (Landsat 8, SPOT 6 and Pleiades) were processed with five different levels of JPEG2000 compression. Each set of data was then cropped to a certain area and analyzed using unsupervised classification. To estimate the accuracy, this paper utilizes the Mean Square Error (MSE) and the Kappa coefficient of agreement. The study shows that scenes compressed with lossless compression show no difference from uncompressed scenes. Furthermore, scenes compressed with lossy compression at ratios of less than 1:10 show no significant difference from uncompressed data, with Kappa coefficients higher than 0.8.
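The Kappa coefficient of agreement used in the accuracy assessment is standard Cohen's kappa over a confusion matrix; a minimal sketch (illustrative, not the paper's processing chain):

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    # Rows: reference classes; columns: predicted classes.
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    p_chance = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / total ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Agreement between classifications of uncompressed vs. compressed scenes
# (hypothetical counts): kappa > 0.8 indicates strong agreement.
m = np.array([[50, 3], [2, 45]])
print(f"kappa = {cohens_kappa(m):.3f}")
```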

6. Kaur, Harjit. "Image Compression Techniques with LZW method." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (2022): 1773–77. http://dx.doi.org/10.22214/ijraset.2022.39999.

Abstract:
Image compression is a technique used to reduce the size of data. In other words, it removes redundant data by applying techniques that make the data easier to store and to transmit over a transmission medium. Compression techniques are broadly divided into two categories: lossy compression, in which some of the data is lost during compression, and lossless compression, in which no data is lost after compression. These compression techniques can be applied to different image formats. This review paper compares the different compression techniques. Keywords: lossy, lossless, image formats, compression techniques.
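For reference, the LZW coder named in the title fits in a few lines; this is the textbook dictionary coder, not necessarily the exact variant compared in the paper:

```python
def lzw_compress(data: bytes) -> list[int]:
    # Start with all single-byte phrases; grow the dictionary as longer
    # phrases recur, emitting one index per longest known phrase.
    dictionary = {bytes([i]): i for i in range(256)}
    phrase, output = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            output.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = bytes([byte])
    if phrase:
        output.append(dictionary[phrase])
    return output

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```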

7. James, Ian. "Webwaves: Lossless vs lossy compression." Preview 2020, no. 205 (2020): 42. http://dx.doi.org/10.1080/14432471.2020.1751792.

8. Wang, Kangli, and Wei Gao. "UniPCGC: Towards Practical Point Cloud Geometry Compression via an Efficient Unified Approach." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12721–29. https://doi.org/10.1609/aaai.v39i12.33387.

Abstract:
Learning-based point cloud compression methods have made significant progress in terms of performance. However, these methods still encounter challenges, including high complexity, limited compression modes, and a lack of support for variable rate, which restrict their practical application. In order to promote the development of practical point cloud compression, we propose an efficient unified point cloud geometry compression framework, dubbed UniPCGC. It is a lightweight framework that supports lossy compression, lossless compression, variable rate and variable complexity. First, we introduce the Uneven 8-Stage Lossless Coder (UELC) in the lossless mode, which allocates more computational complexity to groups with higher coding difficulty and merges groups with lower coding difficulty. Second, the Variable Rate and Complexity Module (VRCM) is realized in the lossy mode through the joint adoption of a rate modulation module and dynamic sparse convolution. Finally, through the dynamic combination of UELC and VRCM, we achieve lossy compression, lossless compression, variable rate and variable complexity within a unified framework. Compared to the previous state-of-the-art method, our method achieves a compression ratio (CR) gain of 8.1% on lossless compression and a Bjontegaard Delta Rate (BD-Rate) gain of 14.02% on lossy compression, while also supporting variable rate and variable complexity.

9. Gunawan, Teddy Surya, Muhammad Khalif Mat Zain, Fathiah Abdul Muin, and Mira Kartiwi. "Investigation of Lossless Audio Compression using IEEE 1857.2 Advanced Audio Coding." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 2 (2017): 422–30. http://dx.doi.org/10.11591/ijeecs.v6.i2.pp422-430.

Abstract:
Audio compression is a method of reducing the space demands and aiding transmission of a source file, and it can be categorized as lossy or lossless. Lossless audio compression was previously considered a luxury due to limited storage space. However, as storage technology progresses, lossless audio files can be seen as the only plausible choice for those seeking the ultimate audio quality experience. Commonly used lossless codecs include FLAC, WavPack, ALAC, Monkey's Audio, True Audio, etc. The IEEE Standard for Advanced Audio Coding (IEEE 1857.2) is a new standard approved by IEEE in 2013 that covers both lossy and lossless audio compression tools. A lot of research has been done on this standard, but this paper focuses on whether the IEEE 1857.2 lossless audio codec is a viable alternative to other existing codecs in its current state. The objective of this paper is therefore to investigate the codec's operation, as initial measurements performed by researchers show that the lossless compression performance of the IEEE compressor is better than that of traditional encoders, while the encoding speed is slower and can be further optimized.

Dissertations / Theses on the topic "Lossless and Lossy compression"

1. Hernandez-Cabronero, Miguel, Ian Blanes, Armando J. Pinho, Michael W. Marcellin, and Joan Serra-Sagrista. "Progressive Lossy-to-Lossless Compression of DNA Microarray Images." IEEE, 2016. http://hdl.handle.net/10150/615540.

Abstract:
The analysis techniques applied to DNA microarray images are under active development. As new techniques become available, it will be useful to apply them to existing microarray images to obtain more accurate results. The compression of these images can be a useful tool to alleviate the costs associated with their storage and transmission. The recently proposed Relative Quantizer (RQ) coder provides the most competitive lossy compression ratios while introducing only acceptable changes in the images. However, images compressed with the RQ coder can only be reconstructed with a limited quality, determined before compression. In this work, a progressive lossy-to-lossless scheme is presented to solve this problem. First, the regular structure of the RQ intervals is exploited to define a lossy-to-lossless coding algorithm called the Progressive RQ (PRQ) coder. Second, an enhanced version that prioritizes a region of interest, called the PRQ-ROI coder, is described. Experiments indicate that the PRQ coder offers progressivity with lossless and lossy coding performance almost identical to the best techniques in the literature, none of which is progressive. In turn, the PRQ-ROI coder exhibits very similar lossless coding results, with better rate-distortion performance than both the RQ and PRQ coders.

2. Kodukulla, Surya Teja. "Lossless Image compression using MATLAB: Comparative Study." Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20038.

Abstract:
Context: Image compression is one of the key and important applications in commercial, research, defence and medical fields. Larger image files cannot be processed or stored quickly and efficiently, so compressing images while maintaining the maximum quality possible is very important for real-world applications. Objectives: Lossy compression is widely popular for image compression and used in commercial applications. In order to perform efficient work related to images, the quality in many situations needs to be high while having a comparatively low file size. Hence lossless compression algorithms are used in this study, to compare the lossless algorithms and to check which algorithm performs the compression while retaining quality at a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The compression techniques are implemented in MATLAB using the image processing toolbox. The compressed images are compared for subjective image quality. The images are compressed with emphasis on maintaining quality rather than on diminishing file size. Result: The LZW algorithm produces binary images, failing in this implementation to produce a lossless image. The Huffman and RLE algorithms produce similar results with compression ratios in the range of 2.5 to 3.7, and both are based on redundancy reduction. The DCT and DWT algorithms compress every element in the matrix defined for the images, maintaining lossless quality with compression ratios in the range 2 to 3.5. Conclusion: The DWT algorithm is best suited for compressing an image efficiently in a lossless manner. As wavelets are used in this compression, all the elements in the image are compressed while retaining the quality. Huffman and RLE produce lossless images, but across a large variety of images, some images may not be compressed with complete efficiency.
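Of the algorithms compared, RLE gives the simplest picture of redundancy reduction; the sketch below (illustrative Python, not the thesis's MATLAB code) shows why it is lossless for any input but only shrinks data with long constant runs:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    # Collapse each run of identical bytes into a (value, length) pair.
    runs: list[list[int]] = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([v]) * n for v, n in runs)

sample = b"\x00" * 40 + b"\xff" * 24          # flat image regions compress well...
assert rle_decode(rle_encode(sample)) == sample  # ...and the round trip is lossless
```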

3. Abbott, Walter D. "A simple, low overhead data compression algorithm for converting lossy processes to lossless." Thesis, Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277905.

Abstract:
Thesis (M.S. in Electrical Engineering), Naval Postgraduate School, December 1993. Thesis advisor: Ron J. Pieper. Cover title: A simple, ... lossy compression processes ... Includes bibliographical references. Also available online.

4. Wilhelmy, Jochen [author], and Willi A. Kalender [academic supervisor]. "Lossless and Lossy Raw Data Compression in CT Imaging / Jochen Wilhelmy. Supervisor: Willi A. Kalender." Erlangen: Universitätsbibliothek der Universität Erlangen-Nürnberg, 2012. http://d-nb.info/1029374414/34.

5. Liu, Yi. "Codage d'images avec et sans pertes à basse complexité et basé contenu [Low-complexity, content-based lossy and lossless image coding]." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0028/document.

Abstract:
This doctoral research project aims at designing an improved version of the still image codec called LAR (Locally Adaptive Resolution) in terms of both compression performance and complexity. Several image compression standards have been proposed and used in multimedia applications, but research continues to progress toward higher coding quality and/or lower processing complexity. JPEG was standardized twenty years ago, yet it is still the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 is limited by its larger computational cost compared to JPEG. In 2008, the JPEG Committee announced a Call for Advanced Image Coding (AIC). This call aims to standardize potential technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework associates compression efficiency with a content-based representation. It supports both lossy and lossless coding under the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), a shortcoming that was detrimental for LAR during the AIC evaluation step. Thus, in this work, the first step is to characterize the impact of the main parameters of the codec on compression efficiency, and next to construct RDO models to configure the parameters of LAR to achieve optimal or sub-optimal coding efficiency. Further, based on the RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated by the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality measurement is taken into consideration, and the RDO models are applied locally in the image rather than globally. The perceptual quality is visibly improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). With the twin goals of coding efficiency and low complexity, a new coding scheme is also proposed in lossless mode under the LAR framework. In this context, all the coding steps are changed for a better final compression ratio. A new classification module is also introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves a compression ratio equivalent to that of JPEG 2000, while saving 76% of the encoding and decoding time on average.

6. Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení [Compression of ECG signals acquired with a mobile device]." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.

Abstract:
Signal compression is a necessary part of ECG scanning because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission it is necessary to minimize the amount of data as much as possible. To minimize the amount of data, lossless or lossy compression algorithms are used. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master's thesis also provides a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and realized.

7. Lúdik, Michal. "Porovnání hlasových a audio kodeků [Comparison of speech and audio codecs]." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.

Abstract:
This thesis deals with the description of human hearing, audio and speech codecs, objective quality measures, and a practical comparison of codecs. The chapter about audio codecs describes the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis. The chapter about speech codecs describes linear predictive coding and the G.729 and OPUS codecs. The evaluation of quality covers the segmental signal-to-noise ratio and perceptual quality evaluation with WSS and PESQ. The last chapter describes the practical part of this thesis, namely a comparison of the memory and time consumption of the audio codecs and a perceptual evaluation of speech codec quality.

8. Kasaei, Shohreh. "Fingerprint analysis using wavelet transform with application to compression and feature extraction." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36053/7/36053_Digitised_Thesis.pdf.

Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them on the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
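The subband bit-allocation step mentioned in this abstract is usually built on the classic log-variance allocation rule; a sketch of that baseline rule follows (illustrative only; the thesis uses a noise-shaping variant adapted to human visual sensitivities):

```python
import numpy as np

def allocate_bits(variances: np.ndarray, avg_bits: float) -> np.ndarray:
    # Classic high-rate rule: b_i = b_avg + 0.5 * log2(var_i / geometric_mean(var)).
    # Negative allocations are clipped; practical coders also round to integers.
    log_var = np.log2(variances)
    bits = avg_bits + 0.5 * (log_var - log_var.mean())
    return np.clip(bits, 0.0, None)

# Subbands with variances 4.0, 1.0, 0.25 at an average budget of 2 bits
# receive 3, 2, and 1 bits respectively.
print(allocate_bits(np.array([4.0, 1.0, 0.25]), avg_bits=2.0))
```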

9. Zheng, L. "Lossy index compression." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1302556/.

Abstract:
This thesis primarily investigates lossy compression of an inverted index. Two approaches of lossy compression are studied in detail, i.e. (i) term frequency quantization, and (ii) document pruning. In addition, a technique for document pruning, i.e. the entropy-based method, is applied to re-rank retrieved documents as query-independent knowledge. Based on the quantization theory, we examine how the number of quantization levels for coding the term frequencies affects retrieval performance. Three methods are then proposed for the purpose of reducing the quantization distortion, including (i) a non-uniform quantizer; (ii) an iterative technique; and (iii) term-specific quantizers. Experiments based on standard TREC test sets demonstrate that nearly no degradation of retrieval performance can be achieved by allocating only 2 or 3 bits for the quantized term frequency values. This is comparable to lossless coding techniques such as unary, γ and δ-codes. Furthermore, if lossless coding is applied to the quantized term frequency values, then around 26% (or 12%) savings can be achieved over lossless coding alone, with less than 2.5% (or no measurable) degradation in retrieval performance. Prior work on index pruning considered posting pruning and term pruning. In this thesis, an alternative pruning approach, i.e. document pruning, is investigated, in which unimportant documents are removed from the document collection. Four algorithms for scoring document importance are described, two of which are dependent on the score function of the retrieval system, while the other two are independent of the retrieval system. Experimental results suggest that document pruning is comparable to existing pruning approaches, such as posting pruning. Note that document pruning affects the global statistics of the indexed collection. We therefore examine whether retrieval performance is superior based on statistics derived from the full or the pruned collection. Our results indicate that keeping statistics derived from the full collection performs slightly better. Document pruning scores documents and then discards those that fall outside a threshold. An alternative is to re-rank documents based on these scores. The entropy-based score, which is independent of the retrieval system, provides a query-independent knowledge of document specificity, analogous to PageRank. We investigate the utility of document specificity in the context of Intranet search, where hypertext information is sparse or absent. Our results are comparable to the previous algorithm that induced a graph link structure based on the measure of similarity between documents. However, a further analysis indicates that our method is superior on computational complexity.
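To make the first approach concrete, a log-scale quantizer for term frequencies using 2-3 bits might look as follows (an illustrative sketch; the thesis evaluates several designs, including non-uniform, iterative, and term-specific quantizers, not necessarily this one):

```python
import math

def quantize_tf(tf: int, max_tf: int, bits: int = 2) -> int:
    # Map tf in [1, max_tf] onto 2**bits levels on a log scale, since
    # term-frequency distributions are heavily skewed toward small values.
    levels = (1 << bits) - 1
    return round(levels * math.log1p(tf) / math.log1p(max_tf))

def dequantize_tf(level: int, max_tf: int, bits: int = 2) -> float:
    levels = (1 << bits) - 1
    return math.expm1(level / levels * math.log1p(max_tf))

print([quantize_tf(tf, max_tf=1000) for tf in (1, 5, 50, 1000)])  # -> [0, 1, 2, 3]
```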

10. Hansson, Erik, and Stefan Karlsson. "Lossless Message Compression." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-21434.

Abstract:
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial or not. A literature study on lossless compression resulted in a compilation of algorithms and techniques. Using this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected to be integrated into LINX as an extra layer to support lossless message compression. The testing involved sending messages with real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. To calculate the effective throughput for each algorithm, the round-trip time was measured. We concluded that the fastest algorithms, i.e. LZ4, LZO and LZFX, were the most efficient in our tests.
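The effective-throughput comparison rests on a simple model: compression pays off when the wire time saved exceeds the codec time added. A back-of-the-envelope sketch with hypothetical numbers (not the thesis's measurements):

```python
def effective_throughput(msg_bytes: int, ratio: float, link_bps: float,
                         t_compress_s: float, t_decompress_s: float) -> float:
    # Payload bytes delivered per second, end to end: wire time shrinks by
    # the compression ratio, but codec time is added on both sides.
    t_wire = (msg_bytes / ratio) * 8.0 / link_bps
    return msg_bytes / (t_compress_s + t_wire + t_decompress_s)

# A fast codec with a modest ratio can beat a slow codec with a higher ratio,
# matching the thesis's conclusion that LZ4/LZO/LZFX won on throughput:
fast = effective_throughput(64_000, ratio=2.0, link_bps=100e6,
                            t_compress_s=0.2e-3, t_decompress_s=0.1e-3)
slow = effective_throughput(64_000, ratio=4.0, link_bps=100e6,
                            t_compress_s=5e-3, t_decompress_s=2e-3)
print(f"fast codec: {fast / 1e6:.1f} MB/s, slow codec: {slow / 1e6:.1f} MB/s")
```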

Books on the topic "Lossless and Lossy compression"

1. Abbott, Walter D. A simple, low overhead data compression algorithm for converting lossy processes to lossless. Naval Postgraduate School, 1993.

2. Shukla, K. K., and M. V. Prasad. Lossy Image Compression. Springer London, 2011. http://dx.doi.org/10.1007/978-1-4471-2218-0.

3. Sayood, Khalid, ed. Lossless compression handbook. Academic Press, 2003.

4. Mielikäinen, Jarno. Lossless compression of hyperspectral images. Lappeenranta University of Technology, 2003.

5. Craig, Michelle. Lossless image compression using connectionist network. University of Toronto, Dept. of Computer Science, 1991.

6. Craig, Michelle Wahl. Lossless image compression using connectionist networks. National Library of Canada, 1991.

7. Reid, Mark Montgomery. Path-dictated, lossless volumetric data compression. [The Author], 1996.

8. Prasad, M. V., ed. Lossy image compression: Domain decomposition-based algorithms. Springer, 2011.

9. Nicholl, Peter Nigel. Feature directed spiral image compression (a new technique for lossless image compression). [The Author], 1994.

10. Schowengerdt, Robert A., and Research Institute for Advanced Computer Science (U.S.), eds. The effect of lossy image compression on image classification. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1995.

Book chapters on the topic "Lossless and Lossy compression"

1. Ng, Wee Keong, Sunghyun Choi, and Chinya Ravishankar. "Lossless and Lossy Data Compression." In Evolutionary Algorithms in Engineering Applications. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-662-03423-1_10.

2. Narayan, Ajai, and Tenkasi V. Ramabadran. "Hybrid Lossless-Lossy Compression of Industrial Radiographs." In Review of Progress in Quantitative Nondestructive Evaluation. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3344-3_97.

3. Wu, Jiaji, Lei Wang, Yong Fang, and L. C. Jiao. "Multiplierless Reversible Integer TDLT/KLT for Lossy-to-Lossless Hyperspectral Image Compression." In Satellite Data Compression. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1183-3_9.

4. Apostolico, A., M. Comin, and L. Parida. "Bridging Lossy and Lossless Compression by Motif Pattern Discovery." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11889342_51.

5. Liu, Yunpeng, Stephan Beck, Renfang Wang, et al. "Hybrid Lossless-Lossy Compression for Real-Time Depth-Sensor Streams in 3D Telepresence Applications." In Lecture Notes in Computer Science. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24075-6_43.

6. Zhang, Li-bao, and Ke Wang. "Efficient Lossy to Lossless Medical Image Compression Using Integer Wavelet Transform and Multiple Subband Decomposition." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28626-4_11.

7. Hou, Ying, and Ying Li. "Hyperspectral Image Lossy-to-Lossless Compression Using 3D SPEZBC Algorithm Based on KLT and Wavelet Transform." In Intelligent Science and Intelligent Data Engineering. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31919-8_91.

8. Kumar, S. N., Ajay Kumar Haridhas, A. Lenin Fred, and P. Sebastin Varghese. "Analysis of Lossy and Lossless Compression Algorithms for Computed Tomography Medical Images Based on Bat and Simulated Annealing Optimization Techniques." In Computational Intelligence Methods for Super-Resolution in Image Processing Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67921-7_6.

9. Ashbourn, Julian. "Lossless Compression." In Audio Technology, Music, and Media. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62429-3_17.

10. Shekhar, Shashi, and Hui Xiong. "Lossy Image Compression." In Encyclopedia of GIS. Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_729.

Conference papers on the topic "Lossless and Lossy compression"

1. Yang, Yizhe, Carson D. Sisk, and Jon C. Calhoun. "Evaluating Lossy and Lossless Compression for DICOM Medical Files." In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825078.

2. Beemkumar, N., Sonia Riyat, and Deepak Kumar. "An Investigation of Lossless and Lossy Data Compression & Source Coding." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10725458.

3. Song, Tiexu, Xiaonian Wang, and Ruizhi Sha. "A fusion lossy and lossless data compression method based on signal attributes." In 2024 7th International Conference on Data Science and Information Technology (DSIT). IEEE, 2024. https://doi.org/10.1109/dsit61374.2024.10881307.

4. Villani, Federico, Sevrin Mathys, Çağla Özsoy, et al. "FPGA-Accelerated Hybrid Lossless and Lossy Compression for Next-Generation Portable Optoacoustic Platforms." In 2024 IEEE Ultrasonics, Ferroelectrics, and Frequency Control Joint Symposium (UFFC-JS). IEEE, 2024. https://doi.org/10.1109/uffc-js60046.2024.10794035.

5. Mentzer, Fabian, Luc Van Gool, and Michael Tschannen. "Learning Better Lossless Compression Using Lossy Compression." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00667.

6. Martins, Bo, and Soren Forchhammer. "Lossless/lossy compression of bilevel images." In Electronic Imaging '97, edited by Giordano B. Beretta and Reiner Eschbach. SPIE, 1997. http://dx.doi.org/10.1117/12.271615.

7. Shinoda, Kazuma, Hisakazu Kikuchi, and Shogo Muramatsu. "A Lossless-by-Lossy Approach to Lossless Image Compression." In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.312814.

8. Martins, B., and S. Forchhammer. "Lossy/lossless coding of bi-level images." In Proceedings DCC '97, Data Compression Conference. IEEE, 1997. http://dx.doi.org/10.1109/dcc.1997.582116.

9. Bassiouni, M. A., A. P. Tzannes, M. C. Tzannes, and N. S. Tzannes. "Image compression using integrated lossless/lossy methods." In [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1991. http://dx.doi.org/10.1109/icassp.1991.150988.

10. Yea, Sehoon, Sungdae Cho, and William A. Pearlman. "Integrated lossy, near-lossless, and lossless compression of medical volumetric data." In Electronic Imaging 2005, edited by Amir Said and John G. Apostolopoulos. SPIE, 2005. http://dx.doi.org/10.1117/12.585931.

Reports on the topic "Lossless and Lossy compression"

1. Vitter, Jeffrey S. Design and Analysis of Lossless and Lossy Data Compression Methods with Applications to Communication and Caching. Defense Technical Information Center, 1994. http://dx.doi.org/10.21236/ada295025.

2. Vitter, Jeffrey S. Design and Analysis of Lossless and Lossy Data Compression Methods and Applications to Communication and Caching - Final Report. Defense Technical Information Center, 1997. http://dx.doi.org/10.21236/ada329667.

3. Stearns, S. D. Lossless compression of instrumentation data: Final report. Office of Scientific and Technical Information (OSTI), 1995. http://dx.doi.org/10.2172/150951.

4. Choi, Junho, and Mitchell R. Grunes. Lossless Data Compression of Packet Data Streams. Defense Technical Information Center, 1996. http://dx.doi.org/10.21236/ada304792.

5. Su, J. K., M. K. Griffin, S. M. Hsu, S. Orloff, and C. A. Upham. Effects of Lossy Compression of Hyperspectral Imagery. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada429591.

6. Grosset, Andre Vincent Pascal, and Sian Jin. Lossy Compression for Cosmology Data on GPU. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1558029.

7. Phoha, Shashi, and Mendel Schmiedekamp. Semantic Source Coding for Flexible Lossy Image Compression. Defense Technical Information Center, 2007. http://dx.doi.org/10.21236/ada464658.

8. Shore, Robert A., and Arthur D. Yaghjian. Complex Waves on 1D, 2D, and 3D Periodic Arrays of Lossy and Lossless Magnetodielectric Spheres. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada534784.

9. Pulido, Jesus J., Zarija Lukic, Paul Thorman, Caixia Zheng, James Paul Ahrens, and Bernd Hamann. Data Reduction Using Lossy Compression for Cosmology and Astrophysics Workflows. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1467236.

10. Orandi, Shahram, John M. Libert, John D. Grantham, et al. An Exploration of the Operational Ramifications of Lossless Compression of 1000 ppi Fingerprint Imagery. National Institute of Standards and Technology, 2012. http://dx.doi.org/10.6028/nist.ir.7779.