Journal articles on the topic 'Lossless and Lossy compression'

Consult the top 50 journal articles for your research on the topic 'Lossless and Lossy compression.'

1

Yu, Rongshan, and Wenxian Yang. "ScaleQC: a scalable lossy to lossless solution for NGS data compression." Bioinformatics 36, no. 17 (2020): 4551–59. http://dx.doi.org/10.1093/bioinformatics/btaa543.

Full text
Abstract:
Motivation: Per-base quality values in Next Generation Sequencing data take a significant portion of storage even after compression. Lossy compression technologies could further reduce the space used by quality values. However, in many applications, lossless compression is still desired. Hence, sequencing data in multiple file formats have to be prepared for different applications. Results: We developed a scalable lossy-to-lossless compression solution for quality values named ScaleQC (Scalable Quality value Compression). ScaleQC provides so-called bit-stream-level scalability: the bit-stream losslessly compressed by ScaleQC can be further truncated to lower data rates without incurring an expensive transcoding operation. Despite its scalability, ScaleQC still achieves compression performance comparable to existing lossless or lossy compressors at both lossless and lossy data rates. Availability and implementation: ScaleQC has been integrated with SAMtools as a special quality value encoding mode for CRAM. Its source code can be obtained from our integrated SAMtools (https://github.com/xmuyulab/samtools), which depends on our integrated HTSlib (https://github.com/xmuyulab/htslib). Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
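The bit-stream scalability described in the ScaleQC entry above can be pictured with a toy bit-plane split: values are coded plane by plane from most to least significant, so a losslessly coded stream can simply be truncated to obtain coarser, lossy reconstructions without re-encoding. The sketch below is a didactic illustration of that idea only, not the ScaleQC codec; the 6-bit value range and the truncation point are assumptions.

```python
# Toy bit-plane scalability sketch (not the ScaleQC codec): dropping the
# least-significant planes of a losslessly coded stream yields a lossy,
# lower-rate reconstruction without re-encoding anything.
import numpy as np

BITS = 6  # assumed dynamic range of the quality values (0..63)

def to_bitplanes(q: np.ndarray) -> list:
    return [(q >> b) & 1 for b in range(BITS - 1, -1, -1)]   # MSB plane first

def from_bitplanes(planes: list) -> np.ndarray:
    q = np.zeros_like(planes[0])
    for i, p in enumerate(planes):
        q |= p << (BITS - 1 - i)
    return q

quals = np.random.default_rng(0).integers(0, 42, size=1000)
planes = to_bitplanes(quals)

# "Truncate" the stream: keep the top 4 planes, zero out the 2 dropped ones.
kept = planes[:4] + [np.zeros_like(quals)] * 2
lossy = from_bitplanes(kept)
print("max reconstruction error after truncation:", int(np.abs(quals - lossy).max()))
```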
2

Eldstål-Ahrens, Albin, Angelos Arelakis, and Ioannis Sourdis. "L2C: Combining Lossy and Lossless Compression on Memory and I/O." ACM Transactions on Embedded Computing Systems 21, no. 1 (2022): 1–27. http://dx.doi.org/10.1145/3481641.

Full text
Abstract:
In this article, we introduce L2C, a hybrid lossy/lossless compression scheme applicable both to the memory subsystem and I/O traffic of a processor chip. L2C employs general-purpose lossless compression and combines it with state-of-the-art lossy compression to achieve compression ratios up to 16:1 and to improve the utilization of the chip's bandwidth resources. Compressing memory traffic yields lower memory access time, improving system performance and energy efficiency. Compressing I/O traffic offers several benefits for resource-constrained systems, including more efficient storage and networking. We evaluate L2C as a memory compressor in simulation with a set of approximation-tolerant applications. L2C improves baseline execution time by an average of 50% and total system energy consumption by 16%. Compared to the lossy and lossless current state-of-the-art memory compression approaches, L2C improves execution time by 9% and 26%, respectively, and reduces system energy costs by 3% and 5%, respectively. I/O compression efficacy is evaluated using a set of real-life datasets. L2C achieves compression ratios of up to 10.4:1 for a single dataset and on average about 4:1, while introducing no more than 0.4% error.
APA, Harvard, Vancouver, ISO, and other styles
3

Magar, Satyawati, and Bhavani Sridharan. "Comparative analysis of various Image compression techniques for Quasi Fractal lossless compression." International Journal of Computer Communication and Informatics 2, no. 2 (2020): 30–45. http://dx.doi.org/10.34256/ijcci2024.

Full text
Abstract:
The most important entities to be considered in image compression methods are peak signal-to-noise ratio (PSNR) and compression ratio (CR). These two parameters are used to judge the quality of any image, and they play a vital role in image processing applications. The biomedical domain is one of the critical areas where large image datasets are involved in analysis, so biomedical image compression is essential. Basically, compression techniques are classified into lossless and lossy. As the name indicates, in the lossless technique the image is compressed without any loss of data, whereas in the lossy technique some information may be lost. Here both lossy and lossless techniques for image compression are used. This research discusses different compression approaches from these two categories and highlights brain images for the compression techniques. Both lossy and lossless techniques are implemented, and their advantages and disadvantages are studied. For this research, two important quality parameters, CR and PSNR, are calculated. The existing techniques DCT, DFT, DWT and fractal compression are implemented, and new techniques are introduced: an oscillation-concept method, BTC-SPIHT, and a hybrid technique using an adaptive threshold and a Quasi Fractal algorithm.
APA, Harvard, Vancouver, ISO, and other styles
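Since several entries above and below report results in terms of compression ratio (CR) and peak signal-to-noise ratio (PSNR), here is a small, self-contained sketch of how the two figures of merit are computed for an 8-bit image. It is only an illustration; the JPEG quality setting and the use of the Pillow library are assumptions, not the codecs evaluated in the paper.

```python
# Minimal CR / MSE / PSNR computation for an 8-bit grayscale image, using a
# JPEG round-trip purely as an example lossy stage (assumes Pillow is installed).
import io
import numpy as np
from PIL import Image

img = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)

buf = io.BytesIO()
Image.fromarray(img).save(buf, format="JPEG", quality=50)   # illustrative lossy codec
buf.seek(0)
rec = np.asarray(Image.open(buf), dtype=np.float64)

cr = img.nbytes / buf.getbuffer().nbytes                    # original bytes / compressed bytes
mse = np.mean((img.astype(np.float64) - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)                      # peak value 255 for 8-bit data
print(f"CR = {cr:.2f}:1   MSE = {mse:.2f}   PSNR = {psnr:.2f} dB")
```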
4

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Full text
Abstract:
The process of reducing the number of bits required to characterize data is referred to as compression. The advantages of compression include a reduction in the time taken to transfer data from one point to another, and a reduction in the cost of storage space and network bandwidth. There are two types of compression algorithms, namely lossy and lossless. Lossy algorithms find utility in compressing audio and video signals, whereas lossless algorithms are used in compressing text messages. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio and video files. These multimedia files demand more storage space than traditional files. This has given rise to the requirement for an efficient compression algorithm. There is a considerable improvement in the computing performance of machines due to the advent of multi-core processors. However, this multi-core architecture is not used by compression algorithms. This paper shows the implementation of lossless compression algorithms, namely the Lempel-Ziv-Markov algorithm, BZip2 and ZLIB, using the concept of multithreading. The results obtained show that the ZLIB algorithm is more efficient in terms of the time taken to compress and decompress the text. The comparison is done for compression both with and without multithreading.
APA, Harvard, Vancouver, ISO, and other styles
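As a rough illustration of the multithreading idea in the entry above, the sketch below splits the input into chunks and compresses them in parallel with Python's standard zlib, bz2 and lzma modules (which release the GIL while compressing). The chunk size, thread count and test data are assumptions, not the authors' implementation; note that independently compressed chunks must also be decompressed chunk by chunk.

```python
# Chunk-level multithreaded compression sketch using standard-library codecs.
import bz2
import lzma
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

CODECS = {"ZLIB": zlib.compress, "BZip2": bz2.compress, "LZMA": lzma.compress}

def compress_chunks(data: bytes, codec, chunk_size: int = 1 << 20, workers: int = 4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(codec, chunks))        # each chunk is compressed independently

if __name__ == "__main__":
    text = b"lossless compression of text messages " * 400_000   # ~15 MB of test data
    for name, codec in CODECS.items():
        t0 = time.perf_counter()
        parts = compress_chunks(text, codec)
        dt = time.perf_counter() - t0
        ratio = len(text) / sum(len(p) for p in parts)
        print(f"{name:5s}  ratio {ratio:6.1f}:1   {dt:.2f} s with 4 threads")
```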
6

Hayati, Anis Kamilah, and Haris Suka Dyatmika. "THE EFFECT OF JPEG2000 COMPRESSION ON REMOTE SENSING DATA OF DIFFERENT SPATIAL RESOLUTIONS." International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (2018): 111. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2724.

Full text
Abstract:
The huge size of remote sensing data strains the information technology infrastructure needed to store, manage, deliver and process the data. To compensate for these disadvantages, compression is a possible solution. JPEG2000 provides both lossless and lossy compression, with scalability for lossy compression. As the lossy compression ratio gets higher, the file size is reduced but the information loss increases. This paper investigates the effect of JPEG2000 compression on remote sensing data of different spatial resolutions. Three sets of data (Landsat 8, SPOT 6 and Pleiades) were processed with five different levels of JPEG2000 compression. Each set of data was then cropped at a certain area and analyzed using unsupervised classification. To estimate the accuracy, this paper utilized the Mean Square Error (MSE) and the Kappa coefficient of agreement. The study shows that scenes compressed using lossless compression have no difference from uncompressed scenes. Furthermore, scenes compressed using lossy compression with a compression ratio of less than 1:10 have no significant difference from uncompressed data, with Kappa coefficients higher than 0.8.
APA, Harvard, Vancouver, ISO, and other styles
7

Kaur, Harjit. "Image Compression Techniques with LZW method." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (2022): 1773–77. http://dx.doi.org/10.22214/ijraset.2022.39999.

Full text
Abstract:
Image compression is a technique used to reduce the size of data. In other words, it removes extra data from the available data by applying techniques that make the data easier to store and transmit over a transmission medium. Compression techniques are broadly divided into two categories. The first is lossy compression, in which some of the data is lost during compression; the second is lossless compression, in which no data is lost after compression. These compression techniques can be applied to different image formats. This review paper compares the different compression techniques. Keywords: lossy, lossless, image formats, compression techniques.
APA, Harvard, Vancouver, ISO, and other styles
8

James, Ian. "Webwaves: Lossless vs lossy compression." Preview 2020, no. 205 (2020): 42. http://dx.doi.org/10.1080/14432471.2020.1751792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Kangli, and Wei Gao. "UniPCGC: Towards Practical Point Cloud Geometry Compression via an Efficient Unified Approach." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12721–29. https://doi.org/10.1609/aaai.v39i12.33387.

Full text
Abstract:
Learning-based point cloud compression methods have made significant progress in terms of performance. However, these methods still encounter challenges including high complexity, limited compression modes, and a lack of support for variable rate, which restrict the practical application of these methods. In order to promote the development of practical point cloud compression, we propose an efficient unified point cloud geometry compression framework, dubbed as UniPCGC. It is a lightweight framework that supports lossy compression, lossless compression, variable rate and variable complexity. First, we introduce the Uneven 8-Stage Lossless Coder (UELC) in the lossless mode, which allocates more computational complexity to groups with higher coding difficulty, and merges groups with lower coding difficulty. Second, Variable Rate and Complexity Module (VRCM) is achieved in the lossy mode through joint adoption of a rate modulation module and dynamic sparse convolution. Finally, through the dynamic combination of UELC and VRCM, we achieve lossy compression, lossless compression, variable rate and complexity within a unified framework. Compared to the previous state-of-the-art method, our method achieves a compression ratio (CR) gain of 8.1% on lossless compression, and a Bjontegaard Delta Rate (BD-Rate) gain of 14.02% on lossy compression, while also supporting variable rate and variable complexity.
APA, Harvard, Vancouver, ISO, and other styles
10

Gunawan, Teddy Surya, Muhammad Khalif Mat Zain, Fathiah Abdul Muin, and Mira Kartiwi. "Investigation of Lossless Audio Compression using IEEE 1857.2 Advanced Audio Coding." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 2 (2017): 422. http://dx.doi.org/10.11591/ijeecs.v6.i2.pp422-430.

Full text
Abstract:
Audio compression is a method of reducing the space demand and aiding transmission of a source file, and can be categorized into lossy and lossless compression. Lossless audio compression was previously considered a luxury due to limited storage space. However, as storage technology progresses, lossless audio files can be seen as the only plausible choice for those seeking the ultimate audio quality experience. Commonly used lossless codecs include FLAC, WavPack, ALAC, Monkey's Audio, True Audio, etc. The IEEE Standard for Advanced Audio Coding (IEEE 1857.2) is a new standard approved by IEEE in 2013 that covers both lossy and lossless audio compression tools. A lot of research has been done on this standard, but this paper focuses on whether the IEEE 1857.2 lossless audio codec is a viable alternative to other existing codecs in its current state. Therefore, the objective of this paper is to investigate the codec's operation, as initial measurements performed by researchers show that the lossless compression performance of the IEEE compressor is better than that of traditional encoders, while the encoding speed is slower and can be further optimized.
APA, Harvard, Vancouver, ISO, and other styles
11

Salman, Ghalib Ahmed, Ahmed Ahmed, and Hareer Moaiad Hussen. "Medical Image Compression Utilizing The Serial Differences and Coding Techniques." InfoTech Spectrum: Iraqi Journal of Data Science 2, no. 1 (2025): 26–36. https://doi.org/10.51173/ijds.v2i1.13.

Full text
Abstract:
Different medical imaging devices used by centers and clinics produce an increasing number of sequential medical images. Imaging techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Fluoroscopy produce a set of series for the same patient. Within these images, most of the image content is fixed, with noticeable changes confined to the remaining part. This consumes non-negligible storage space. This paper proposes a near-lossless compression technique that exploits the fixed image parts to focus on the changing parts for a higher compression ratio. In some applications lossless compression techniques are highly preferable to lossy ones; in others, near-lossless compression is preferable to both lossless and lossy compression, where lossy methods may lose significant details and lossless ones produce lower compression ratios than near-lossless ones. Previous works dealt with Fluoroscopy images as individual images or used video compression techniques. This work handles the whole series of images as an integrated object. It subtracts successive images to detect ROI areas, producing zero values over similar areas and non-zero values within ROI areas. A double coding technique and the near-lossless concept of compression increase the compression ratio. Conducted experiments showed encouraging results when benchmarked against other published works in medical image compression.
APA, Harvard, Vancouver, ISO, and other styles
12

Yenewondim, Biadgie. "Near-lossless image compression using an improved edge adaptive hierarchical interpolation." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 3 (2020): 1576–83. https://doi.org/10.11591/ijeecs.v20.i3.pp1576-1583.

Full text
Abstract:
Lossy compression of medical images is required to store a huge amount of medical data efficiently on remote storage devices and to reduce transmission time across low-bandwidth communication links. On the other hand, lossless compression of medical images is recommended because the loss of minor information can lead to wrong medical diagnosis results that affect the lives of patients. To compromise between the conflicting requirements of lossy and lossless image compression, a near-lossless image compression method is proposed. In previous work, an edge adaptive hierarchical interpolation (EAHINT) algorithm was proposed for progressive lossless image compression. In this paper, the EAHINT algorithm is enhanced for scalable near-lossless image compression. The proposed interpolation algorithm has three linear components, namely one-directional, multi-directional and non-directional linear interpolators. The EAHINT algorithm switches adaptively among the three linear interpolators based on the strength of the edge in the local context of the current pixel being predicted. The strength of the edge in the local window is estimated using the variance of the pixels in that window. Although the actual predictors are still linear functions, the switching mechanism tries to deal with non-linear structures like edges. Simulation results demonstrate that the improved interpolation algorithm has a better compression ratio than the original EAHINT algorithm and the JPEG-LS image compression standard.
APA, Harvard, Vancouver, ISO, and other styles
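The adaptive switching described in this EAHINT entry (and in the related record at entry 30 below) can be sketched as follows: the variance of a small causal neighbourhood stands in for edge strength and selects either a non-directional average or a one-directional predictor. The threshold and the three-pixel context are illustrative assumptions, not the actual EAHINT interpolators.

```python
# Variance-switched prediction sketch (illustrative, not the EAHINT algorithm):
# smooth contexts use a non-directional average, edgy contexts follow the
# direction with the smaller gradient; the residual would then be entropy coded.
import numpy as np

def predict_pixel(img: np.ndarray, y: int, x: int, thresh: float = 100.0) -> float:
    n, w, nw = float(img[y - 1, x]), float(img[y, x - 1]), float(img[y - 1, x - 1])
    if np.var([n, w, nw]) < thresh:
        return (n + w) / 2.0                        # weak edge: non-directional average
    return w if abs(w - nw) < abs(n - nw) else n    # strong edge: one-directional choice

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.int16)
img[:, 32:] += 120                                  # synthetic vertical edge

residuals = [img[y, x] - predict_pixel(img, y, x)
             for y in range(1, 64) for x in range(1, 64)]
print("mean |residual|:", float(np.mean(np.abs(residuals))))
```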
13

Hussain, S. K., and G. Raja. "A JPEG 2000 BASED HYBRID IMAGE COMPRESSION TECHNIQUE FOR MEDICAL IMAGES." Nucleus 48, no. 4 (2011): 287–93. https://doi.org/10.71330/thenucleus.2011.822.

Full text
Abstract:
Use of lossy compression for medical images can result in compression error that may be considered a diagnostic problem by medical doctors. Hybrid schemes, a combination of lossy and lossless compression, are used to achieve a higher compression ratio without compromising the subjective quality of medical images. This paper proposes a new hybrid compression method for medical images. Different combinations of lossy and lossless compression schemes (RLE, LZW, JPEG-LS, JPEG and JPEG2000) are implemented to find the best hybrid combination, keeping the subjective quality of the medical image as a benchmark. X-ray images are used for experimentation. Experimental results show that the hybrid combination of lossless JPEG2000 and lossy JPEG2000 produces optimized results without compromising the subjective quality of medical images required for diagnostics. The proposed hybrid combination has an average compression ratio, space saving, MSE and PSNR of 0.21, 78.97, 1.16 and 47.58, respectively, for all the medical images used in experimentation. The proposed hybrid scheme can be used for medical image compression.
APA, Harvard, Vancouver, ISO, and other styles
15

Shinoda, K., H. Kikuchi, and S. Muramatsu. "Lossless-by-Lossy Coding for Scalable Lossless Image Compression." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E91-A, no. 11 (2008): 3356–64. http://dx.doi.org/10.1093/ietfec/e91-a.11.3356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Al-Khafaji, Ghadah, and Maha A. Rajab. "Lossless and Lossy Polynomial Image Compression." IOSR Journal of Computer Engineering 18, no. 04 (2016): 56–62. http://dx.doi.org/10.9790/0661-1804025662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Raja, S., and C. Suresh Gnana Dhas. "A Novel Method for Compression of Pressure Ulcer Image using Region of Interest Method." Asian Journal of Computer Science and Technology 6, no. 1 (2017): 15–20. http://dx.doi.org/10.51983/ajcst-2017.6.1.1777.

Full text
Abstract:
Other than lossy and lossless compression methods, a third option in pressure ulcer image compression is the hybrid approach. A hybrid approach combines both lossy and lossless compression schemes: diagnostically important regions (regions of interest) are losslessly coded and the non-ROI regions are lossy coded. Such hybrid approaches are referred to as regionally lossless coding. Many researchers have proposed various hybrid medical image compression techniques. In these techniques, regions of interest are first segmented and suitable coding is applied to the ROI and non-ROI parts. By doing so, high compression ratios can be obtained while the quality of diagnostically important regions is kept high, as desired by clinicians. These hybrid approaches differ in their segmentation goal, the segmentation approach they follow and the coding techniques. There is no segmentation algorithm that is suitable for all pressure ulcer images; most segmentation algorithms are for specific kinds of pressure ulcer images. In this paper we propose a segmentation algorithm for extracting the ROI from pressure ulcer images with hemorrhage, to feed into a compression algorithm.
APA, Harvard, Vancouver, ISO, and other styles
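The "regionally lossless" idea in the entry above can be sketched in a few lines: pixels inside a binary ROI mask are kept exactly while background pixels are coarsely quantized, and the whole result then goes through a lossless entropy stage. The mask, quantization step and use of zlib are assumptions for illustration, not the segmentation or coders proposed in the paper.

```python
# Hybrid ROI coding sketch: lossless inside the ROI, quantized (lossy) outside,
# followed by a generic lossless entropy stage.
import zlib
import numpy as np

def hybrid_encode(img: np.ndarray, roi: np.ndarray, q: int = 16) -> bytes:
    """img: uint8 image; roi: boolean mask of the same shape; q: background step."""
    mixed = np.where(roi, img, (img // q) * q).astype(np.uint8)  # ROI pixels untouched
    return zlib.compress(mixed.tobytes())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256), dtype=np.uint8)
roi = np.zeros_like(img, dtype=bool)
roi[96:160, 96:160] = True                 # hypothetical diagnostically important region

print("raw bytes:", img.nbytes, " hybrid-coded bytes:", len(hybrid_encode(img, roi)))
```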
18

Khaitu, Shree Ram, and Sanjeeb Prasad Panday. "Fractal Image Compression Using Canonical Huffman Coding." Journal of the Institute of Engineering 15, no. 1 (2020): 91–105. http://dx.doi.org/10.3126/jie.v15i1.27718.

Full text
Abstract:
Image compression techniques have become a very important subject with the rapid growth of multimedia applications. The main motivations behind image compression are efficient and lossless transmission as well as storage of digital data. Image compression techniques are of two types: lossless and lossy. Lossy compression techniques are applied to natural images, as a minor loss of data is acceptable. Entropy encoding is a lossless compression scheme that is independent of the particular features of the media, as it has its own unique codes and symbols. Huffman coding is an entropy coding approach for efficient transmission of data. This paper highlights a fractal image compression method based on fractal features and on searching for and finding the best replacement blocks for the original image. Canonical Huffman coding, which provides better fractal compression than arithmetic coding, is used in this paper. The results obtained show that the Canonical Huffman coding based fractal compression technique increases the speed of compression and has better PSNR as well as a better compression ratio than standard Huffman coding.
APA, Harvard, Vancouver, ISO, and other styles
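Since the entry above relies on canonical Huffman coding, here is a compact sketch of how canonical codes are built: ordinary Huffman code lengths are computed first, then codewords are reassigned in (length, symbol) order so that only the lengths need to be transmitted. This illustrates the entropy-coding stage only, not the fractal block-matching part of the paper.

```python
# Canonical Huffman construction sketch: Huffman code lengths followed by
# canonical reassignment of codewords.
import heapq
from collections import Counter

def huffman_lengths(freqs: dict) -> dict:
    if len(freqs) == 1:                            # degenerate single-symbol case
        return {next(iter(freqs)): 1}
    # Heap items: (frequency, tie-breaker, symbols in this subtree).
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1                        # merged symbols move one level deeper
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return lengths

def canonical_codes(lengths: dict) -> dict:
    code, prev_len, codes = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)                   # canonical rule: shift, assign, increment
        codes[sym], prev_len = format(code, f"0{ln}b"), ln
        code += 1
    return codes

data = b"canonical huffman coding example"
print(canonical_codes(huffman_lengths(dict(Counter(data)))))
```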
19

David S, Alex, Almas Begum, and Ravikumar S. "Content clustering for MRI Image compression using PPAM." International Journal of Engineering & Technology 7, no. 1.7 (2018): 126. http://dx.doi.org/10.14419/ijet.v7i1.7.10631.

Full text
Abstract:
Image compression helps to save memory and data utilization while transferring images between nodes. Compression is one of the key techniques in medical imaging. Both lossy and lossless compression are used, depending on the application. In medical imaging, each and every component of a pixel is very important, hence it is natural to choose lossless compression for medical images. MRI images are compressed after processing. In this paper we have used the PPMA method to compress MRI images. For retrieval of the compressed image, a content clustering method is used.
APA, Harvard, Vancouver, ISO, and other styles
20

Brysina, Iryna Victorivna, and Victor Olexandrovych Makarichev. "DISCRETE ATOMIC COMPRESSION OF DIGITAL IMAGES: ALMOST LOSSLESS COMPRESSION." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 1 (March 23, 2019): 29–36. http://dx.doi.org/10.32620/reks.2019.1.03.

Full text
Abstract:
In this paper, we consider the problem of digital image compression with high requirements to the quality of the result. Obviously, lossless compression algorithms can be applied. Since lossy compression provides a higher compression ratio and, hence, higher memory savings than lossless compression, we propose to use lossy algorithms with settings that provide the smallest loss of quality. The subject matter of this paper is almost lossless compression of full color 24-bit digital images using the discrete atomic compression (DAC) that is an algorithm based on the discrete atomic transform. The goal is to investigate the compression ratio and the quality loss indicators such as uniform (U), root mean square (RMS) and peak signal to noise ratio (PSNR) metrics. We also study the distribution of the difference between pixels of the original image and the corresponding pixels of the reconstructed image. In this research, the classic test images and the classic aerial images are considered. U-metric, which is highly dependent on even minor local changes, is considered as the major metric of quality loss. We solve the following tasks: to evaluate memory savings and loss of quality for each test image. We use the methods of digital image processing, atomic function theory, and approximation theory. The computer program "Discrete Atomic Compression: User Kit" with the mode "Almost Lossless Compression" is used to obtain results of the DAC processing of test images. We obtain the following results: 1) the difference between the smallest and the largest loss of quality is minor; 2) loss of quality is quite stable and predictable; 3) the compression ratio depends on the smoothness of the color change (the smallest and the largest values are obtained when processing the test images with the largest and the smallest number of small details in the image, respectively); 4) DAC provides 59 percent of memory savings; 5) ZIP-compression of DAC-files, which contain images compressed by DAC, is efficient. Conclusions: 1) the almost lossless compression mode of DAC provides sufficiently stable values of the considered quality loss metrics; 2) DAC provides relatively high compression ratio; 3) there is a possibility of further optimization of the DAC algorithm; 4) further research and development of this algorithm are promising.
APA, Harvard, Vancouver, ISO, and other styles
21

Martí, Aniol, Jordi Portell, Jaume Riba, and Orestes Mas. "Context-Aware Lossless and Lossy Compression of Radio Frequency Signals." Sensors 23, no. 7 (2023): 3552. http://dx.doi.org/10.3390/s23073552.

Full text
Abstract:
We propose an algorithm based on linear prediction that can perform both lossless and near-lossless compression of RF signals. The proposed algorithm is coupled with two signal detection methods to determine the presence of relevant signals and apply varying levels of loss as needed. The first method uses spectrum sensing techniques, while the second one takes advantage of the error computed in each iteration of the Levinson–Durbin algorithm. These algorithms have been integrated as a new pre-processing stage into FAPEC, a data compressor first designed for space missions. We test the lossless algorithm using two different datasets. The first one was obtained from OPS-SAT, an ESA CubeSat, while the second one was obtained using an SDRplay RSPdx in Barcelona, Spain. The results show that our approach achieves compression ratios that are 23% better than gzip (on average) and very similar to those of FLAC, but at higher speeds. We also assess the performance of our signal detectors using the second dataset. We show that high ratios can be achieved thanks to the lossy compression of the segments without any relevant signal.
APA, Harvard, Vancouver, ISO, and other styles
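A stripped-down version of the prediction idea in the entry above: a fixed first-order predictor (the previous sample), rather than the adaptive Levinson–Durbin recursion the authors use, already turns a slowly varying signal into small residuals that a generic lossless coder handles far better than the raw samples. The synthetic signal and the use of zlib are assumptions for illustration only.

```python
# First-order prediction-residual sketch (the real algorithm adapts its
# predictor with Levinson-Durbin; here the predictor is simply x[n-1]).
import zlib
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200_000)
signal = (1000 * np.sin(2 * np.pi * t / 500) + rng.normal(0, 2, t.size)).astype(np.int16)

residual = np.diff(signal, prepend=signal[:1]).astype(np.int16)  # x[n] - x[n-1]

print("raw samples :", len(zlib.compress(signal.tobytes())), "bytes")
print("residuals   :", len(zlib.compress(residual.tobytes())), "bytes")
```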
22

Basit, M. A., and G. Raja. "PERFORMANCE EVALUATION OF EMERGING JPEGXR COMPRESSION STANDARD FOR MEDICAL IMAGES." Nucleus 49, no. 1 (2012): 11–19. https://doi.org/10.71330/thenucleus.2012.810.

Full text
Abstract:
Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard designed for a variety of applications and has support for lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation, and ten images of each organ are tested. The performance of JPEG XR is compared with JPEG2000 and JPEG-LS using mean square error, peak signal-to-noise ratio, mean absolute error and the structural similarity index. JPEG XR shows improvements of 20.73 dB and 5.98 dB over JPEG-LS and JPEG2000, respectively, for the various test images used in experimentation.
APA, Harvard, Vancouver, ISO, and other styles
23

alZahir, Saif, and Syed M. Naqvi. "A Hybrid Lossless-Lossy Binary Image Compression Scheme." International Journal of Computer Vision and Image Processing 3, no. 4 (2013): 37–50. http://dx.doi.org/10.4018/ijcvip.2013100103.

Full text
Abstract:
In this paper, the authors present a binary image compression scheme that can be used for either lossless or lossy compression requirements. This scheme contains five new contributions. The lossless component of the scheme partitions the input image into a number of non-overlapping rectangles using a new line-by-line method. The upper-left and lower-right vertices of each rectangle are identified, and their coordinates are efficiently encoded using three methods of representation and compression. The lossy component, on the other hand, provides higher compression through two techniques. 1) It reduces the number of rectangles in the input image using mathematical regression models. These models guarantee image quality, so that the rectangle reduction does not produce visual distortion in the image; they were obtained through subjective tests and regression analysis on a large set of binary images. 2) Further compression gain is achieved by discarding isolated pixels and 1-pixel rectangles from the image. Simulation results show that the proposed scheme provides significant improvements over previously published work for both the lossy and the lossless components.
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, Yijie, Jui-Chuan Liu, Ching-Chun Chang, and Chin-Chen Chang. "Lossless Recompression of Vector Quantization Index Table for Texture Images Based on Adaptive Huffman Coding Through Multi-Type Processing." Symmetry 16, no. 11 (2024): 1419. http://dx.doi.org/10.3390/sym16111419.

Full text
Abstract:
With the development of the information age, all walks of life have become inseparable from the internet. Every day, huge amounts of data are transmitted and stored on the internet. Therefore, to improve transmission efficiency and reduce storage occupancy, compression technology is becoming increasingly important. Based on the application scenario, it is divided into lossless data compression and lossy data compression, which allows a certain degree of information loss. Vector quantization (VQ) is a widely used lossy compression technology. Building upon VQ compression, we propose a lossless compression scheme for the VQ index table. In other words, our work aims to recompress the output of VQ compression and restore it to the VQ compression carrier without loss. It is worth noting that our method specifically targets texture images. By leveraging the spatial symmetry inherent in these images, our approach generates high-frequency symbols through difference calculations, which facilitates the use of adaptive Huffman coding for efficient compression. Experimental results show that our scheme has better compression performance than other schemes.
APA, Harvard, Vancouver, ISO, and other styles
25

Silver, Jeremy D., and Charles S. Zender. "The compression–error trade-off for large gridded data sets." Geoscientific Model Development 10, no. 1 (2017): 413–23. http://dx.doi.org/10.5194/gmd-10-413-2017.

Full text
Abstract:
Abstract. The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
APA, Harvard, Vancouver, ISO, and other styles
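The difference between scalar linear packing and the layer-packing scheme described above can be seen in a few lines of numpy: one scale/offset pair for the whole array versus one pair per vertical layer. The synthetic field, the 16-bit target type and the parameter names are illustrative assumptions, not the paper's implementation.

```python
# Scalar linear packing vs. per-layer packing: same quantizer, different
# granularity of the scale/offset parameters.
import numpy as np

def pack(data: np.ndarray, axis=None):
    lo = data.min(axis=axis, keepdims=True)
    scale = (data.max(axis=axis, keepdims=True) - lo) / 65535.0   # map range to uint16
    return np.round((data - lo) / scale).astype(np.uint16), scale, lo

def unpack(packed, scale, lo):
    return packed * scale + lo

rng = np.random.default_rng(1)
# Synthetic field: strong variation across 20 vertical layers, mild variation horizontally.
field = np.exp(np.linspace(0, 10, 20))[:, None, None] * (1.0 + 0.01 * rng.random((20, 90, 180)))

scalar_err = np.abs(unpack(*pack(field)) - field).max()
layer_err = np.abs(unpack(*pack(field, axis=(1, 2))) - field).max()
print(f"max abs error   scalar packing: {scalar_err:.3e}   layer packing: {layer_err:.3e}")
```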
26

Chithra, P. L., and A. Christoper Tamilmathi. "Effective lossy and lossless color image compression with Multilayer Perceptron." International Journal of Engineering & Technology 7, no. 2.22 (2018): 9. http://dx.doi.org/10.14419/ijet.v7i2.22.11800.

Full text
Abstract:
This paper presents an effective lossy and lossless color image compression algorithm using a multilayer perceptron. The parallel structure of the neural network and the concept of image compression are combined to yield a better reconstructed image with a constant bit rate and less computational complexity. Each original color image component is divided into 8x8 blocks. The discrete cosine transform (DCT) is applied to each block for lossy compression, or the discrete wavelet transform (DWT) is applied for lossless image compression. The output coefficient values are normalized using a mod function. These normalized vectors are passed to the multilayer perceptron (MLP). The proposed method implements a back-propagation neural network (BPNN), which is suitable for the compression process with less convergence time. Performance of the proposed compression work is evaluated in three ways. First, the performance of lossy and lossless compression with the BPNN is compared. Second, different hidden-layer sizes are evaluated, showing that increasing the number of neurons in the hidden layer preserves the brightness of the image. Third, three different types of activation function are evaluated, and the results show that each function has its own merit. The proposed algorithm is compared with the existing JPEG color compression algorithm based on PSNR measurements. The resulting values indicate that the proposed method performs well, producing a better reconstructed image with the PSNR value increased by approximately 21.62%.
APA, Harvard, Vancouver, ISO, and other styles
27

Singh, Manjari, Sushil Kumar, Siddharth Singh, and Manish Shrivastava. "Various Image Compression Techniques: Lossy and Lossless." International Journal of Computer Applications 142, no. 6 (2016): 23–26. http://dx.doi.org/10.5120/ijca2016909829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Nian, Yongjian, Mi He, and Jianwei Wan. "Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding." Mathematical Problems in Engineering 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/825673.

Full text
Abstract:
A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step size is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
APA, Harvard, Vancouver, ISO, and other styles
29

Mishra, Amit Kumar. "Versatile Video Coding (VVC) Standard: Overview and Applications." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (2019): 975–81. http://dx.doi.org/10.17762/turcomat.v10i2.13578.

Full text
Abstract:
Information security includes picture and video compression and encryption, since compressed data is more secure than uncompressed imagery. Another point is that handling data of smaller sizes is simple. Therefore, efficient, secure, and simple data transport methods are created through effective data compression technology. There are two different sorts of compression algorithms: lossy compression and lossless compression. Any type of data format, including text, audio, video, and picture files, may leverage these technologies. In this procedure, the Least Significant Bit technique is used to encrypt each frame of the video file format in order to increase security. The primary goals of this procedure are to safeguard the data by encrypting the frames and compressing the video file. Using PSNR to enhance process throughput would also enhance data transmission security while reducing data loss.
APA, Harvard, Vancouver, ISO, and other styles
30

Sinshahw, Yenewondim Biadgie. "Near-lossless image compression using an improved edge adaptive hierarchical interpolation." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 3 (2020): 1576. http://dx.doi.org/10.11591/ijeecs.v20.i3.pp1576-1583.

Full text
Abstract:
In medical and scientific imaging, lossless image compression is recommended because the loss of minor details subject to medical diagnosis can lead to a wrong diagnosis. On the other hand, lossy compression of medical images is required in the long run because a huge quantity of medical data needs remote storage, which in turn takes a long time to search and transfer. Instead of choosing between lossless and lossy image compression methods, a near-lossless image compression method can be used to compromise between the two conflicting requirements. In previous work, an edge adaptive hierarchical interpolation (EAHINT) was proposed for resolution-scalable lossless compression of images. In this paper, it is enhanced for scalable near-lossless image compression. The interpolator of this algorithm switches among one-directional, multi-directional and non-directional linear interpolators adaptively, based on the strength of the edge in a 3x3 local causal context of the current pixel being predicted. The strength of the edge in the local window is estimated using the variance of the pixels in the local window. Although the actual predictors are still linear functions, the switching mechanism tries to deal with non-linear structures like edges. Simulation results demonstrate that the improved interpolation algorithm has a better compression ratio than the original EAHINT algorithm and the JPEG-LS image compression standard.
APA, Harvard, Vancouver, ISO, and other styles
31

Kundlik, Nilam V., and Manoj Kumar Singh. "OSCILLATION AND DCT BASED BIOMEDICAL IMAGE COMPRESSION." OSCILLATION AND DCT BASED BIOMEDICAL IMAGE COMPRESSION 7, no. 6 (2018): 353–59. https://doi.org/10.5281/zenodo.1290477.

Full text
Abstract:
Image compression for biomedical image analysis is important because of the time-consuming processing and the storage it requires. It becomes necessary to compress the image to reduce the processing time required to retrieve target components from biomedical images. Biomedical images also require more space for storage, and management of these images is very difficult. Different techniques are used to compress the image. These techniques are classified into lossy compression and lossless compression techniques. Of these methods, lossy compression schemes are not widely used due to the possible loss of useful clinical information, and operations like enhancement may lead to further degradation under lossy compression. Lossless schemes avoid the above-mentioned drawbacks of lossy compression. But no lossless compression algorithm can efficiently compress all possible data. For this reason, multiple algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
APA, Harvard, Vancouver, ISO, and other styles
32

Gunawan, Teddy Surya, and Mira Kartiwi. "Performance Evaluation of Multichannel Audio Compression." Indonesian Journal of Electrical Engineering and Computer Science 10, no. 1 (2018): 146–53. https://doi.org/10.11591/ijeecs.v10.i1.pp146-153.

Full text
Abstract:
In recent years, multichannel audio systems have been widely used in modern sound devices, as they can provide a more realistic and engaging experience to the listener. This paper focuses on the performance evaluation of three lossy codecs (AAC, Ogg Vorbis, and Opus) and three lossless codecs (FLAC, TrueAudio, and WavPack) for multichannel audio signals, including stereo, 5.1 and 7.1 channels. Experiments were conducted on the same three audio files but with different channel configurations. The performance of each encoder was evaluated based on its encoding time (averaged over 100 runs), data reduction, and audio quality. Usually, there is a trade-off between the three metrics. To simplify the evaluation, a new integrated performance metric was proposed that combines all three performance metrics. Using the new measure, FLAC was found to be the best lossless codec, while Ogg Vorbis or Opus was found to be the best lossy codec depending on the channel configuration. This result can be used to determine the proper audio format for multichannel audio systems.
APA, Harvard, Vancouver, ISO, and other styles
33

Shunmugan, S., and P. Arockia Jansi Rani. "AN EFFICIENT JOINT ENCRYPTION AND COMPRESSION USING HOP AND PERMUTATION." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 6, no. 7 (2017): 194–205. https://doi.org/10.5281/zenodo.823090.

Full text
Abstract:
A new coding approach, "visually lossless" coding, is introduced in place of numerically lossless coding to reduce storage space and lower data transmission. The motivation for this new coding method is the increase in image sizes. Here we describe a lossy compression method for encrypted images, which compresses the image with minute quality losses that cannot be detected. This lossy compression method includes DCT with RLE compression, Hierarchical Oriented Prediction (HOP), uniform quantization, orthogonal matrix generation, negative sign removal and Huffman compression. The encrypted image is divided into an elastic part, which is compressed using Xingpeng Zhang's method, and a rigid part, for which the HOP method is used. This method was tested on images of different types and sizes, and the results reveal that it is much better than the existing compression methods. The bit-rate reduction ratio of this method is 10.45% and the naked-eye perception is visually lossless.
APA, Harvard, Vancouver, ISO, and other styles
34

Setyaningsih, Emy, and Agus Harjoko. "Survey of Hybrid Image Compression Techniques." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 4 (2017): 2206. http://dx.doi.org/10.11591/ijece.v7i4.pp2206-2214.

Full text
Abstract:
A compression process reduces or compresses the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements of various hybrid compression techniques during the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method. This technique combines lossy and lossless compression methods to obtain a high compression ratio while maintaining the quality of the reconstructed image. A lossy compression technique produces a relatively high compression ratio, whereas lossless compression brings about high-quality data reconstruction, as the data can later be decompressed with the same results as before the compression. Discussion of the knowledge of and issues about ongoing hybrid compression technique development indicates the possibility of conducting further research to improve the performance of image compression methods.
APA, Harvard, Vancouver, ISO, and other styles
35

Maireles-González, Òscar, Joan Bartrina-Rapesta, Miguel Hernández-Cabronero, and Joan Serra-Sagristà. "Lossy Compression of Integer Astronomical Images Preserving Photometric Properties*." Publications of the Astronomical Society of the Pacific 136, no. 11 (2024): 114506. http://dx.doi.org/10.1088/1538-3873/ad8b69.

Full text
Abstract:
Abstract Observatories are producing astronomical image data at quickly increasing rates. As a result, the efficiency of the compression methods employed is critical to meet the storage and distribution requirements of both observatories and scientists. This paper presents a novel lossy compression technique that is able to preserve the results of photometry analysis with high fidelity while improving upon the state of the art in terms of compression performance. The proposed compression pipeline combines a flexible bi-region quantization scheme with the lossless, dictionary-based, LPAQ9M encoder. The quantization process allows compression performance and photometric fidelity to be precisely tailored to different scientific requirements. A representative data set of 16-bit integer astronomical images produced by telescopes from all around the world has been employed to empirically assess its compression-fidelity trade-offs, and compare them to those of the de facto standard Fpack compressor. In these experiments, the widespread SExtractor software is employed as the ground truth for photometric analysis. Results indicate that after lossy compression with our proposed method, the decompressed data allows consistent detection of over 99% of all astronomical objects for all tested telescopes, maintaining the highest photometric fidelity (as compared to state of the art lossy techniques). When compared to the best configuration of Fpack (Hcompress lossy using 1 quantization parameter) at similar compression rates, our proposed method provides better photometry precision: 7.15% more objects are detected with magnitude errors below 0.01, and 9.13% more objects with magnitudes below SExtractor’s estimated measurement error. Compared to the best lossless compression results, the proposed pipeline allows us to reduce the compressed data set volume by up to 38.75% and 27.94% while maintaining 90% and 95%, respectively, of the detected objects with magnitude differences lower than 0.01 mag; and up to 18.93% while maintaining 90% of the detected objects with magnitude differences lower than the photometric measure error.
APA, Harvard, Vancouver, ISO, and other styles
36

Cheng, X., and Z. Li. "HOW DOES SHANNON’S SOURCE CODING THEOREM FARE IN PREDICTION OF IMAGE COMPRESSION RATIO WITH CURRENT ALGORITHMS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 22, 2020): 1313–19. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1313-2020.

Full text
Abstract:
Images with large volumes are generated daily with the advent of advanced sensors and platforms (e.g., satellites, unmanned autonomous vehicles) for data acquisition. This raises issues in the storage, processing, and transmission of images. To address such issues, image compression is essential and can be achieved by lossy and/or lossless approaches. With lossy compression, a high compression ratio can usually be achieved, but the original data can never be completely recovered. On the other hand, with lossless compression, the original information is well preserved. Lossless compression is very desirable in many applications such as remote sensing and geological surveying. Shannon's source coding theorem defines the theoretical limits of the compression ratio. However, some researchers have discovered that some compression techniques achieve a compression ratio higher than the theoretical limit. Two questions then naturally arise: when does this happen, and why? This study is dedicated to answering these two questions. Six algorithms are used to compress 1650 images with different complexities. The experimental results show that the generally acknowledged Shannon coding theorem is still good enough for predicting the compression ratio of algorithms that consider statistical information only, but is not capable of predicting the compression ratio of algorithms that consider the configurational information of pixels. Overall, this study indicates that new empirical (or theoretical) models for predicting the lossless compression ratio can be built with metrics capturing configurational information.
APA, Harvard, Vancouver, ISO, and other styles
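A small worked example of the point the entry above makes: the zeroth-order (per-byte) entropy bounds any code that treats symbols as independent, yet a dictionary coder that exploits the arrangement of the bytes, their "configurational information", can far exceed the ratio that the first-order estimate predicts. The test data below is an assumption chosen to make the gap obvious.

```python
# Zeroth-order entropy vs. an actual codec on highly structured data whose
# byte histogram is perfectly uniform.
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

data = bytes(range(256)) * 4096          # 1 MiB: uniform histogram, strong structure
h = entropy_bits_per_byte(data)
print(f"zeroth-order entropy: {h:.2f} bits/byte -> predicted ratio ~{8 / h:.2f}:1")
print(f"zlib (exploits repetition): {len(data) / len(zlib.compress(data)):.0f}:1")
```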
37

Hung, Tran Dang, and Jan Platoš. "Mathematical Preliminaries in the Case of Lossless Compression Markov Models." Saudi Journal of Engineering and Technology 8, no. 05 (2023): 98–102. http://dx.doi.org/10.36348/sjet.2023.v08i05.003.

Full text
Abstract:
Compression schemes can be divided into two categories, lossy and lossless; this paper presents lossless data compression models, in which the original data can be correctly recovered from the compressed material. Some mathematical results are assumed; results from probability are assumed and used to evaluate the compression techniques we discuss. To learn more about the mathematical concepts behind some of the topics in this article, see [2, 3]. First, several ideas in information theory that provide a standard for the development of lossless data compression schemes are briefly reviewed. We then look at several ways of modeling data that lead to efficient data compression coding schemes.
APA, Harvard, Vancouver, ISO, and other styles
38

Patel, Jigar. "Enhanced Encoding Technique for Lossless Image Compression." RESEARCH HUB International Multidisciplinary Research Journal 9, no. 1 (2022): 01–09. http://dx.doi.org/10.53573/rhimrj.2022.v09i01.001.

Full text
Abstract:
This paper proposes a new encoding technique for lossless image compression. The pixel color information is stored in the file using an efficient and sophisticated encoding technique, which is then further compressed by existing compression algorithms and coding techniques. The final file, with the proposed extension, gives better compression ratios than existing image formats. The file is then decoded and the image is recovered without any loss of information. The paper also discusses results and a comparative analysis, on a popular image set, of the compression ratios of the proposed scheme and existing image formats. This shows that the proposed encoding scheme helps to save storage space where lossy compression techniques are not recommended.
APA, Harvard, Vancouver, ISO, and other styles
39

Báscones, Daniel, Carlos González, and Daniel Mozos. "An FPGA Accelerator for Real-Time Lossy Compression of Hyperspectral Images." Remote Sensing 12, no. 16 (2020): 2563. http://dx.doi.org/10.3390/rs12162563.

Full text
Abstract:
Hyperspectral images offer great possibilities for remote studies, but can be difficult to manage due to their size. Compression helps with storage and transmission, and many efforts have been made towards standardizing compression algorithms, especially in the lossless and near-lossless domains. For long term storage, lossy compression is also of interest, but its complexity has kept it away from real-time performance. In this paper, JYPEC, a lossy hyperspectral compression algorithm that combines PCA and JPEG2000, is accelerated using an FPGA. A tier 1 coder (a key step and the most time-consuming in JPEG2000 compression) was implemented in a heavily pipelined fashion. Results showed a performance comparable to that of existing 0.18 μm CMOS implementations, all while keeping a small footprint on FPGA resources. This enabled the acceleration of the most complex step of JYPEC, bringing the total execution time below the real-time constraint.
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Ke, Bin Liu, Yongjian Nian, Mi He, and Jianwei Wan. "Distributed lossy compression for hyperspectral images based on multilevel coset codes." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 02 (2017): 1750012. http://dx.doi.org/10.1142/s0219691317500126.

Full text
Abstract:
This paper focuses on the problem of lossy compression for hyperspectral images and presents an efficient compression algorithm based on distributed source coding. The proposed algorithm employs a block-based quantizer followed by distributed lossless coding, which is implemented through the use of multilevel coset codes. First, a bitrate allocation algorithm is proposed to assign the rational bitrate for each block. Subsequently, the multilinear regression model is employed to construct the side information of each block, and the optimal quantization step size of each block is obtained under the assigned bitrate while minimizing the distortion. Finally, the quantized version of each block is encoded by distributed lossless compression. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art transform-based compression algorithms. Moreover, the proposed algorithm provides both low encoder complexity and error resilience, making it suitable for onboard compression.
APA, Harvard, Vancouver, ISO, and other styles
41

Biagetti, Giorgio, Paolo Crippa, Laura Falaschetti, Ali Mansour, and Claudio Turchetti. "Energy and Performance Analysis of Lossless Compression Algorithms for Wireless EMG Sensors." Sensors 21, no. 15 (2021): 5160. http://dx.doi.org/10.3390/s21155160.

Full text
Abstract:
Electromyography (EMG) sensors produce a stream of data at rates that can easily saturate a low-energy wireless link such as Bluetooth Low Energy (BLE), especially if more than a few EMG channels are being transmitted simultaneously. Compressing the data can thus allow both longer battery life and more simultaneous channels at the same time. A lot of research has been done on lossy compression algorithms for EMG data, but, being lossy, they inevitably introduce artifacts into the signal. Such artifacts are usually tolerable for current applications. Nevertheless, for some research purposes, and to enable future research on the collected data that might need to exploit currently unforeseen features discarded by lossy algorithms, lossless compression may be very important, as it guarantees that no extra artifacts are introduced into the digitized signal. The present paper aims at demonstrating the effectiveness of such approaches, investigating the performance of several algorithms and their implementation on a real EMG BLE wireless sensor node. It is demonstrated that the required bandwidth can be more than halved, even reduced to 1/4 in an average case, and, if the complexity of the compressor is kept low, significant power savings are also ensured.
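To make the bandwidth argument concrete, here is a minimal lossless pipeline of the kind such a sensor node might run: delta coding of 16-bit EMG samples followed by a general-purpose entropy coder (zlib stands in for whatever low-complexity coder the node actually uses); the signal and all names are synthetic, not taken from the paper.

```python
import zlib
import numpy as np

def compression_ratio(samples: np.ndarray) -> float:
    """Lossless ratio: raw 16-bit stream size / (delta-coded + DEFLATE) size."""
    raw = samples.astype(np.int16).tobytes()
    deltas = np.diff(samples.astype(np.int32), prepend=0)     # consecutive EMG samples are correlated
    packed = zlib.compress(deltas.astype(np.int16).tobytes(), level=6)
    return len(raw) / len(packed)

# Synthetic low-amplitude EMG-like signal sampled at 1 kHz for 10 s.
t = np.arange(10_000) / 1000.0
emg = (200 * np.sin(2 * np.pi * 80 * t) + np.random.normal(0, 20, t.size)).astype(np.int16)
print(f"lossless compression ratio: {compression_ratio(emg):.2f}")
```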
APA, Harvard, Vancouver, ISO, and other styles
42

Avinash, Gopal B. "Image compression and data integrity in confocal microscopy." Proceedings, annual meeting, Electron Microscopy Society of America 51 (August 1, 1993): 206–7. http://dx.doi.org/10.1017/s0424820100146874.

Full text
Abstract:
In confocal microscopy, one method of managing large data is to store the data in a compressed form using image compression algorithms. These algorithms can be either lossless or lossy. Lossless algorithms compress images without losing any information, with modest compression ratios (memory for the original / memory for the compressed) which are usually between 1 and 2 for typical confocal 2-D images. However, lossy algorithms can provide higher compression ratios (3 to 8) at the expense of information content in the images. The main purpose of this study is to empirically demonstrate the application of lossy compression techniques to images obtained from a confocal microscope while retaining the qualitative and quantitative image integrity under certain criteria. A fluorescent pollen specimen was imaged using ODYSSEY, a real-time laser scanning confocal microscope from NORAN Instruments, Inc. The images (128 by 128) consisted of a single frame (scanned in 33 ms), a 4-frame average, a 64-frame average and an edge-preserving smoothed image of the single frame.
APA, Harvard, Vancouver, ISO, and other styles
43

P., John Vivek, Elangovan K., Jayaram R., and Mohammed Javeeth B. "ROI BASED MEDICAL IMAGE EPITOMIZE USING SPECK AND AAC." International Journal of Computational Research and Development 1, no. 2 (2017): 26–29. https://doi.org/10.5281/zenodo.376806.

Full text
Abstract:
This paper proposes ROI-based medical image compression using SPECK and AAC for telemedicine applications. It is a hybrid image compression model for efficient transmission of medical images that combines lossless and lossy compression techniques. The dual-tree complex wavelet transform with AAC is used for lossless compression, while the secondary region is compressed with wavelet-based SPECK coding to achieve a high compression ratio with a low error rate. SPECK provides energy-efficient compression, reducing battery consumption during the transmission of images. Finally, the performance of this hybrid compression method is evaluated through parameters such as MSE, PSNR and compression ratio.
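The abstract evaluates the hybrid scheme with MSE, PSNR and compression ratio; the helpers below implement those standard definitions for an 8-bit image (function names and the synthetic example are ours, not the paper's).

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    return original_bytes / compressed_bytes

# Example with a synthetic image and a slightly noisy "decompressed" copy.
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
rec = np.clip(img + np.random.randint(-3, 4, img.shape), 0, 255).astype(np.uint8)
print(f"MSE={mse(img, rec):.2f}  PSNR={psnr(img, rec):.2f} dB")
```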
APA, Harvard, Vancouver, ISO, and other styles
44

Chikhaoui, Dalila, Mohammed Beladgham, and Mohamed Benaissa. "Convolutional auto-encoder and discrete wavelet transform for lossy region-based medical image compression." STUDIES IN ENGINEERING AND EXACT SCIENCES 5, no. 3 (2024): e12578. https://doi.org/10.54021/seesv5n3-040.

Full text
Abstract:
Medical imaging is a vital and ever-evolving discipline that plays a critical role in detecting, diagnosing, and planning surgeries for diseases. In medical imaging, image compression systems are employed to reduce storage and bandwidth requirements. This paper presents a region of interest (ROI) based image compression method for brain MRI images that employs a convolutional auto-encoder (CAE) and the discrete wavelet transform (DWT) to achieve both efficient compression and high-quality reconstruction in the lossy setting. Our compression approach involves separating image regions and combining a lossy compression technique based on the CAE and DWT with a lossless compression technique based on arithmetic encoding. To achieve optimal compression of non-ROI regions, the encoding network of the CAE generates a compacted feature map that preserves structural information, which aids the DWT-based codec and the decoding network of the CAE in reconstructing the output. To maintain diagnostically significant information, the ROI portion of the image is losslessly compressed using an arithmetic encoding technique. To assess the effectiveness of our proposed method, we compared our results to standard lossy compression algorithms and recent approaches. Our method achieved a peak signal-to-noise ratio (PSNR) of 37.91 dB and a mean structural similarity index measure (MS-SSIM) of 98.62% at a high compression ratio of 30.
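A toy illustration of the region split this method relies on: ROI pixels are stored losslessly (zlib standing in for arithmetic coding) while non-ROI pixels are coarsely quantized before entropy coding. The CAE and DWT stages of the actual method are not modelled; the mask, step size and names are illustrative.

```python
import zlib
import numpy as np

def roi_split_compress(image: np.ndarray, roi_mask: np.ndarray, step: int = 16):
    """Lossless coding of ROI pixels, quantize-then-code for the background."""
    roi_stream = zlib.compress(image[roi_mask].tobytes(), level=9)          # exact
    background = (image[~roi_mask] // step).astype(np.uint8)                # lossy: coarse levels
    bg_stream = zlib.compress(background.tobytes(), level=9)
    return roi_stream, bg_stream

def roi_split_decompress(roi_stream, bg_stream, roi_mask, step: int = 16):
    out = np.empty(roi_mask.shape, dtype=np.uint8)
    out[roi_mask] = np.frombuffer(zlib.decompress(roi_stream), dtype=np.uint8)
    out[~roi_mask] = np.frombuffer(zlib.decompress(bg_stream), dtype=np.uint8) * step + step // 2
    return out

# Example: circular ROI in the middle of a synthetic 256x256 slice.
yy, xx = np.mgrid[:256, :256]
mask = (yy - 128) ** 2 + (xx - 128) ** 2 < 60 ** 2
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
rec = roi_split_decompress(*roi_split_compress(img, mask), mask)
assert np.array_equal(img[mask], rec[mask])        # the ROI is recovered bit-exactly
```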
APA, Harvard, Vancouver, ISO, and other styles
45

Moffat, Alistair, Timothy C. Bell, and Ian H. Witten. "Lossless Compression for Text and Images." International Journal of High Speed Electronics and Systems 08, no. 01 (1997): 179–231. http://dx.doi.org/10.1142/s0129156497000068.

Full text
Abstract:
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images—particularly bilevel ones, or ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described.
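The survey's central point is the separation of modelling from coding; the sketch below pairs the simplest possible model (static order-0 symbol counts) with a canonical coder (Huffman, built on heapq). The function name and example text are ours, not the survey's.

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Build a prefix code from an order-0 model (symbol frequencies) of the text."""
    model = Counter(text)                              # modelling step: estimate probabilities
    heap = [[freq, i, [sym, ""]] for i, (sym, freq) in enumerate(model.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                                 # degenerate single-symbol input
        return {heap[0][2][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:                               # coding step: build the Huffman tree
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return {sym: code for sym, code in heap[0][2:]}

message = "lossless compression separates modeling from coding"
codes = huffman_code(message)
bits = sum(len(codes[ch]) for ch in message)
print(f"{bits} coded bits vs {8 * len(message)} raw bits")
```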
APA, Harvard, Vancouver, ISO, and other styles
46

No, Albert, Mikel Hernaez, and Idoia Ochoa. "CROMqs: An infinitesimal successive refinement lossy compressor for the quality scores." Journal of Bioinformatics and Computational Biology 18, no. 06 (2020): 2050031. http://dx.doi.org/10.1142/s0219720020500316.

Full text
Abstract:
The amount of sequencing data is growing at a fast pace due to a rapid revolution in sequencing technologies. Quality scores, which indicate the reliability of each of the called nucleotides, take a significant portion of the sequencing data. In addition, quality scores are more challenging to compress than nucleotides, and they are often noisy. Hence, a natural solution to further decrease the size of the sequencing data is to apply lossy compression to the quality scores. Lossy compression may result in a loss of precision; however, it has been shown that when operating at some specific rates, lossy compression can achieve performance on variant calling similar to that achieved with the losslessly compressed data (i.e. the original data). We propose Coding with Random Orthogonal Matrices for quality scores (CROMqs), the first lossy compressor designed for the quality scores with the “infinitesimal successive refinability” property. With this property, the encoder needs to compress the data only once, at a high rate, while the decoder can decompress it iteratively. The decoder can reconstruct the set of quality scores at each step with reduced distortion each time. This characteristic is particularly useful in sequencing data compression, since the encoder does not generally know what the most appropriate rate of compression is, e.g. for not degrading variant calling accuracy. CROMqs avoids the need to compress the data at multiple rates, hence incurring time savings. In addition to this property, we show that CROMqs obtains a comparable rate-distortion performance to the state-of-the-art lossy compressors. Moreover, we also show that it achieves a comparable performance on variant calling to that of the losslessly compressed data while achieving more than 50% reduction in size.
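The following sketch is not the CROMqs algorithm itself (which relies on random orthogonal matrices); it is a generic bit-plane successive-refinement quantizer, included only to illustrate the "compress once, decode at many rates" property discussed above. The score range and all names are assumptions.

```python
import numpy as np

def encode_bitplanes(q: np.ndarray, num_planes: int = 6):
    """Split Phred-like quality scores (0..63) into bit-planes, most significant first."""
    return [((q >> (num_planes - 1 - p)) & 1).astype(np.uint8) for p in range(num_planes)]

def decode_prefix(planes, used: int, num_planes: int = 6) -> np.ndarray:
    """Reconstruct from only the first `used` planes; each extra plane roughly halves the error."""
    rec = np.zeros_like(planes[0], dtype=np.int32)
    for p in range(used):
        rec |= planes[p].astype(np.int32) << (num_planes - 1 - p)
    if used < num_planes:
        rec += 1 << (num_planes - used - 1)      # midpoint of the unresolved interval
    return rec

scores = np.random.randint(0, 64, size=100)
planes = encode_bitplanes(scores)
for used in (2, 4, 6):
    err = np.abs(scores - decode_prefix(planes, used)).mean()
    print(f"{used} planes decoded -> mean abs error {err:.2f}")
```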
APA, Harvard, Vancouver, ISO, and other styles
47

Scarmana, G. "Lossless data compression of grid-based digital elevation models: A png image format evaluation." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 313–19. http://dx.doi.org/10.5194/isprsannals-ii-5-313-2014.

Full text
Abstract:
At present, computers, lasers, radars, planes and satellite technologies make possible very fast and accurate topographic data acquisition for the production of maps. However, the problem of managing and manipulating this data efficiently remains. One particular type of map is the elevation map. When stored on a computer, it is often referred to as a Digital Elevation Model (DEM). A DEM is usually a square matrix of elevations. It is like an image, except that it contains a single channel of information (that is, elevation) and can be compressed in a lossy or lossless manner by way of existing image compression protocols. Compression has the effect of reducing memory requirements and transmission times over digital links, while maintaining the integrity of data as required. In this context, this paper investigates the effects of the PNG (Portable Network Graphics) lossless image compression protocol on floating-point elevation values for 16-bit DEMs of dissimilar terrain characteristics. The PNG is a robust, universally supported, extensible, lossless, general-purpose and patent-free image format. Tests demonstrate that the compression ratios and decompression times achieved with the PNG lossless compression protocol can be comparable to, or better than, proprietary lossless JPEG variants, other image formats and available lossless compression algorithms.
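To see why a PNG-style approach suits grid DEMs, the sketch below applies PNG's "Sub" filter (difference from the left neighbour) to a 16-bit elevation grid before DEFLATE, the same filter-plus-deflate combination PNG uses internally; the terrain is synthetic, not the paper's test data.

```python
import zlib
import numpy as np

def deflate_ratio(grid: np.ndarray, filtered: bool) -> float:
    """Compression ratio of a 16-bit DEM with or without a PNG-style Sub filter."""
    data = grid.astype(np.uint16)
    if filtered:
        # Horizontal differences wrap modulo 2^16, so the filter stays losslessly reversible.
        data = np.diff(data.astype(np.int32), axis=1, prepend=0).astype(np.uint16)
    payload = zlib.compress(data.tobytes(), level=9)
    return grid.size * 2 / len(payload)

# Smooth synthetic terrain: neighbouring elevations are highly correlated.
y, x = np.mgrid[:512, :512]
dem = (1000 + 200 * np.sin(x / 40.0) + 150 * np.cos(y / 55.0)).astype(np.uint16)
print(f"raw deflate: {deflate_ratio(dem, False):.1f}:1   Sub-filtered: {deflate_ratio(dem, True):.1f}:1")
```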
APA, Harvard, Vancouver, ISO, and other styles
48

Jasim, Sarah. "MEDICAL IMAGES COMPRESSION BASED ON SPIHT AND BAT INSPIRED ALGORITHMS." Iraqi Journal for Computers and Informatics 45, no. 1 (2019): 1–5. http://dx.doi.org/10.25195/ijci.v45i1.43.

Full text
Abstract:
There is a significant necessity to compress medical images for the purposes of communication and storage. Most currently available compression techniques produce an extremely high compression ratio with a high quality loss. In medical applications, the diagnostically significant regions (interest region) should have a high image quality. Therefore, it is preferable to compress the interest regions by utilizing lossless compression techniques, whilst the diagnostically less significant regions (non-interest region) can be compressed by utilizing lossy compression techniques. In this paper, a hybrid technique of Set Partitioning in Hierarchical Trees (SPIHT) and Bat-inspired algorithms has been utilized for lossless compression of the interest region, and the non-interest region is lossily compressed with the Discrete Cosine Transform (DCT) technique. The experimental results show that the proposed hybrid technique enhances the compression performance and ratio. Also, the utilization of DCT increases compression performance with low computational complexity.
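A minimal sketch of the lossy non-interest-region stage mentioned above: one 8x8 block is transformed with an orthonormal DCT-II, uniformly quantized and reconstructed. The SPIHT and Bat-inspired parts of the hybrid technique are not reproduced, and the quantization step is illustrative.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def lossy_block(block: np.ndarray, q: float = 20.0) -> np.ndarray:
    """Transform, quantize, dequantize and inverse-transform one 8x8 block."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / q)            # this is where information is discarded
    return C.T @ (quantized * q) @ C

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
rec = lossy_block(block)
print("max abs error after DCT quantization:", np.abs(block - rec).max())
```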
APA, Harvard, Vancouver, ISO, and other styles
49

Aiazzi, Bruno, Luciano Alparone, Stefano Baronti, Cinzia Lastri, and Massimo Selva. "Spectral Distortion in Lossy Compression of Hyperspectral Data." Journal of Electrical and Computer Engineering 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/850637.

Full text
Abstract:
Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may be set to be constant with wavelength. Otherwise, it may be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in case of reflectance spectra obtained from compressed radiance data, if compared with the constant distortion allocation at the same compression ratio.
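SAM and MAD, the two distortion measures this study relies on, are straightforward to state in code; the functions below use their standard definitions (per-pixel spectral angle in degrees, maximum absolute radiometric deviation) on hypothetical arrays of spectra.

```python
import numpy as np

def spectral_angle_mapper(original: np.ndarray, decoded: np.ndarray) -> np.ndarray:
    """Per-pixel spectral angle (degrees) between (pixels, bands) spectra."""
    num = np.sum(original * decoded, axis=-1)
    den = np.linalg.norm(original, axis=-1) * np.linalg.norm(decoded, axis=-1)
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

def max_absolute_deviation(original: np.ndarray, decoded: np.ndarray) -> float:
    """MAD: the near-lossless guarantee is a bound on this value."""
    return float(np.max(np.abs(original.astype(np.float64) - decoded.astype(np.float64))))

# Example: 1000 pixels of 224-band radiance spectra and a noisy decoded copy.
orig = np.random.rand(1000, 224) * 400
dec = orig + np.random.normal(0, 1.0, orig.shape)
print(f"mean SAM = {spectral_angle_mapper(orig, dec).mean():.4f} deg,  MAD = {max_absolute_deviation(orig, dec):.2f}")
```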
APA, Harvard, Vancouver, ISO, and other styles
50

Merhav, Neri. "Lossy Compression of Individual Sequences Revisited: Fundamental Limits of Finite-State Encoders." Entropy 26, no. 2 (2024): 116. http://dx.doi.org/10.3390/e26020116.

Full text
Abstract:
We extend Ziv and Lempel’s model of finite-state encoders to the realm of lossy compression of individual sequences. In particular, the model of the encoder includes a finite-state reconstruction codebook followed by an information lossless finite-state encoder that compresses the reconstruction codeword with no additional distortion. We first derive two different lower bounds to the compression ratio, which depend on the number of states of the lossless encoder. Both bounds are asymptotically achievable by conceptually simple coding schemes. We then show that when the number of states of the lossless encoder is large enough in terms of the reconstruction block length, the performance can be improved, sometimes significantly so. In particular, the improved performance is achievable using a random-coding ensemble that is universal, not only in terms of the source sequence but also in terms of the distortion measure.
APA, Harvard, Vancouver, ISO, and other styles
