Journal articles on the topic 'Lempel-Ziv-Welch (LZW) Algorithm'

Consult the top 38 journal articles for your research on the topic 'Lempel-Ziv-Welch (LZW) Algorithm.'

1

Watchanupaporn, Orawan, and Worasait Suwannik. "LZW Chromosome Encoding in Estimation of Distribution Algorithms." International Journal of Applied Evolutionary Computation 4, no. 4 (October 2013): 41–61. http://dx.doi.org/10.4018/ijaec.2013100103.

Abstract:
Estimation of distribution algorithm (EDA) can solve more complicated problems than its predecessor, the genetic algorithm. EDA uses various methods to probabilistically model a group of highly fit individuals. Calculating the model in sophisticated EDAs is very time consuming. To reduce the model-building time, the authors propose compressed chromosome encoding: a chromosome is encoded in a format that can be decompressed by the Lempel-Ziv-Welch (LZW) algorithm. The authors combine LZW encoding with various EDAs and term the resulting class of algorithms Lempel-Ziv-Welch Estimation of Distribution Algorithms (LZWEDA). Experimental results show that LZWEDA significantly outperforms the original EDA. Finally, the authors analyze how LZW encoding transforms a fitness landscape.
2

Suharso, Aries, Jejen Zaelani, and Didi Juardi. "KOMPRESI FILE MENGGUNAKAN ALGORITMA LEMPEL ZIV WELCH (LZW)." Komputasi: Jurnal Ilmiah Ilmu Komputer dan Matematika 17, no. 2 (July 14, 2020): 372–80. http://dx.doi.org/10.33751/komputasi.v17i2.2147.

Abstract:
In the field of information technology, data communication is closely related to file delivery. File size is sometimes a constraint in the delivery process: large files take longer to deliver than smaller ones. One way to handle this problem is compression. This study uses an experimental method with a waterfall development model comprising analysis, design, coding, and testing. The application applies the Lempel-Ziv-Welch (LZW) algorithm, which belongs to the lossless compression techniques, i.e., compression that does not alter the original data. Compression tests using the LZW algorithm show an average compression ratio of 51.04% for text files, with an average time of 2.56 seconds, and 37.26% for image files, with an average time of 0.44 seconds. Across all file types tested, the average compression ratio of the LZW algorithm is 40.40%, with an average time of 1.81 seconds.
3

Fauzan, Mohamad Nurkamal, Muhammad Alif, and Cahyo Prianto. "Comparison of Huffman Algorithm and Lempel Ziv Welch Algorithm in Text File Compression." IT Journal Research and Development 7, no. 2 (December 30, 2022): 155–69. http://dx.doi.org/10.25299/itjrd.2023.10437.

Abstract:
Data storage hardware has developed rapidly over time. In line with this development, the amount of digital data shared on the internet increases every day, so no matter how large our storage devices are, it is only a matter of time until that storage space is exhausted. To make the most of storage space, a technique called compression emerged. This study focuses on a comparative analysis of two lossless compression algorithms, the Huffman algorithm and Lempel Ziv Welch (LZW). A number of test files of different types are applied to both algorithms. The performance of each algorithm is determined by comparing space saving and compression time. The test results show that the Lempel Ziv Welch (LZW) algorithm is superior to the Huffman algorithm for .txt and .csv file compression, with average space savings of 63.85% and 77.56% respectively. The compression speed of each algorithm is directly proportional to the file size.
4

Priya, C., T. Kesavamurthy, and M. Uma Priya. "An Efficient Lossless Medical Image Compression Using Hybrid Algorithm." Advanced Materials Research 984-985 (July 2014): 1276–81. http://dx.doi.org/10.4028/www.scientific.net/amr.984-985.1276.

Abstract:
Recently, many new algorithms for image compression based on wavelets have been developed. This paper gives a detailed explanation of the SPIHT algorithm combined with the Lempel Ziv Welch compression technique for image compression, with a MATLAB implementation. Set Partitioning in Hierarchical Trees (SPIHT) is one of the most efficient algorithms known today. The SPIHT algorithm creates pyramid structures based on a wavelet decomposition of an image. Lempel Ziv Welch is a universal lossless data compression algorithm which guarantees that the original information can be exactly reproduced from the compressed data. The proposed methods have a better compression ratio, computational speed, and good reconstruction quality of the image. To analyse the proposed lossless methods, the performance metrics compression ratio, mean square error, and peak signal-to-noise ratio are calculated. Keywords: Lempel-Ziv-Welch (LZW), SPIHT, wavelet.
5

Ma, Shaowen. "Comparison of image compression techniques using Huffman and Lempel-Ziv-Welch algorithms." Applied and Computational Engineering 5, no. 1 (June 14, 2023): 793–801. http://dx.doi.org/10.54254/2755-2721/5/20230705.

Abstract:
Image compression technology is very popular in the field of image analysis because compressed images are convenient to store and transmit. In this paper, the Huffman algorithm and the Lempel-Ziv-Welch (LZW) algorithm, both widely used in image compression, are introduced, and the compressed image results of the two algorithms are calculated and compared. Based on the four dimensions of Compression Ratio (CR), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Bits Per Pixel (BPP), the applicable conditions of the two algorithms for compressing small image files are analysed. The results show that when the source image files are smaller than 300 kB, the Compression Ratio (CR) of the Huffman algorithm is better than that of the LZW algorithm. However, for Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Bits Per Pixel (BPP), which represent the quality of the compressed images, the LZW algorithm gives more satisfactory results.
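The comparison above relies on standard size and quality metrics. As a rough illustration (not taken from the paper; the random test image and the assumed compressed size are invented for the example), the following Python sketch computes CR, MSE, PSNR, and BPP for an 8-bit grayscale image and its reconstruction.

```python
import numpy as np

def compression_metrics(original, reconstructed, compressed_bytes):
    """Illustrative CR, MSE, PSNR and BPP for 8-bit grayscale images."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)

    mse = np.mean((original - reconstructed) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    original_bytes = original.size               # one byte per pixel for 8-bit images
    cr = original_bytes / compressed_bytes       # compression ratio
    bpp = 8 * compressed_bytes / original.size   # bits per pixel

    return cr, mse, psnr, bpp

# Hypothetical usage with a random image and an assumed compressed size.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cr, mse, psnr, bpp = compression_metrics(img, img, compressed_bytes=2048)
print(f"CR={cr:.2f}, MSE={mse:.2f}, PSNR={psnr}, BPP={bpp:.2f}")
```
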
6

Zhang, Feng Yuan, Li Li Wen, Xiao Lu Jia, Zhao Li, Meng Li Chen, and Cheng Chen. "LZW Algorithm Research and Amelioration Based on Pointer Trace." Applied Mechanics and Materials 121-126 (October 2011): 4498–502. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.4498.

Abstract:
The LZW (Lempel Ziv Welch) algorithm is a dictionary compression algorithm with excellent performance. The algorithm, whose important advantages include strong universality and a dictionary that is formed dynamically during coding and decoding, is used extensively in the field of lossless data compression. This paper implements an LZW algorithm based on a tree-like data structure in C and offers two optimization schemes, pointer tracing and variable-length codes. The results show that these schemes greatly improve compression efficiency while reducing compression time, and provide an effective guarantee for real-time transmission.
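For readers unfamiliar with the dictionary mechanism these papers build on, a minimal textbook-style LZW encoder in Python is sketched below (an illustration only, not the C or hardware implementations discussed above); the dictionary starts with all single bytes and grows by one phrase per emitted code.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: emit dictionary codes, growing the dictionary as new phrases appear."""
    dictionary = {bytes([i]): i for i in range(256)}  # initial single-byte entries
    next_code = 256
    phrase = b""
    output = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate                 # keep extending the current phrase
        else:
            output.append(dictionary[phrase])  # emit code for the longest known phrase
            dictionary[candidate] = next_code  # register the new phrase
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        output.append(dictionary[phrase])
    return output

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```
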
7

Fitra Wijayanto, Erick, Muhammad Zarlis, and Zakarias Situmorang. "Increase the PSNR Of Image Using LZW and AES Algorithm With MLSB on Steganography." International Journal of Engineering & Technology 7, no. 2.5 (March 10, 2018): 119. http://dx.doi.org/10.14419/ijet.v7i2.5.13965.

Abstract:
Much research has taken a hybrid approach in which a text message is compressed with the Lempel-Ziv-Welch (LZW) algorithm and also encrypted before insertion. The text message, in ciphertext form, is inserted into an image file using the LSB (Least Significant Bit) method. The results of this study indicate that the Peak Signal to Noise Ratio (PSNR) is 0.94 times lower than that of the LSB method, with a ratio of 20.33%, and 10.04% compared with Kekre's method. To improve the PSNR of the stego image after insertion, this research inserts audio samples using 5 bits to reduce the amount of inserted data, so that a low stego-image MSE can be obtained. Prior to insertion, the text file is compressed with the Lempel-Ziv-Welch (LZW) algorithm and encrypted with the Advanced Encryption Standard (AES) algorithm. The compressed and encrypted text file is then inserted with the Modified Least Significant Bit (MLSB) algorithm. To test the reliability of the steganography, the stego image is evaluated by calculating the Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). The extraction process uses the MLSB algorithm, followed by decryption with the AES algorithm and decompression with the LZW algorithm. The experimental results show that the MSE values obtained are lower and the PSNR of the proposed method is better, by a factor (α) of 1.044, than the method of Kaur et al. Extraction of the embedded text file from the stego image works well, recovering the encrypted and uncompressed text files.
8

Sudirman, Sudirman. "Enkripsi Citra Bitmap Menggunakan Algoritma Kompresi Lampel-Ziv-Welch (LZW)." ALGORITMA : JURNAL ILMU KOMPUTER DAN INFORMATIKA 4, no. 1 (May 4, 2020): 9. http://dx.doi.org/10.30829/algoritma.v4i1.7242.

Abstract:
The use of data in the form of image files has become widespread in various fields, so securing image files from unauthorized persons is important. Various techniques to reduce file size, including securing files, have been developed, one of which is compression. Compression techniques become important when processing large images such as bitmap images. Lempel-Ziv-Welch is a lossless compression algorithm that can compress images without any loss of the pixel elements in them, i.e., the compressed image is identical to the original image. The image is encrypted first and then compressed using the Lempel-Ziv-Welch algorithm. The result of the compression is no longer an image but a file with the *.mat extension. The change of file format from an image to a non-image file can help avoid cryptanalysis.
9

King, G. R. Gnana, C. Christopher Seldev, and N. Albert Singh. "A Novel Compression Technique for Compound Images Using Parallel Lempel-Ziv-Welch Algorithm." Applied Mechanics and Materials 626 (August 2014): 44–51. http://dx.doi.org/10.4028/www.scientific.net/amm.626.44.

Abstract:
A compound image is a combination of natural images, text, and graphics. This paper presents a compression technique for improving coding efficiency. The algorithm first decomposes the compound image using a 3-level biorthogonal wavelet transform, and the transformed image is then further compressed by a parallel dictionary-based LZW algorithm called PDLZW. In the PDLZW algorithm, instead of a single fixed-word-width dictionary, a hierarchical variable-word-width dictionary set is used, containing several dictionaries of small address space with increasing word widths for the compression and decompression algorithms. The experimental results show that the PSNR value is increased and the mean square error value is improved.
10

Shirakol, Shrikanth, Akshata Koparde, Sandhya ., Shravan Kulkarni, and Yogesh Kini. "Performance optimization of dual stage algorithm for lossless data compression and decompression." International Journal of Engineering & Technology 7, no. 2.21 (April 20, 2018): 127. http://dx.doi.org/10.14419/ijet.v7i2.21.11849.

Abstract:
In this paper, an optimized dual-stage architecture is proposed, combining the Lempel-Ziv-Welch (LZW) algorithm in the first phase with arithmetic coding as the later part of the architecture. The LZW algorithm is a lossless compression algorithm in which the code for each character is available in the dictionary, saving 5 bits per cycle compared with ASCII. In arithmetic coding, the numbers are represented by an interval of real numbers from zero to one according to their probabilities; it is an entropy coding and is lossless in nature. Text passed through the proposed architecture is compressed at a higher rate.
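As a rough illustration of the interval idea mentioned in this abstract (a generic sketch with an assumed toy probability model, not the paper's architecture), the following Python snippet narrows the [0, 1) interval for a short symbol sequence; a real coder would also need renormalization and bit output.

```python
# Assumed toy probability model: cumulative ranges per symbol on [0, 1).
ranges = {"a": (0.0, 0.6), "b": (0.6, 0.9), "c": (0.9, 1.0)}

def arithmetic_interval(message: str) -> tuple[float, float]:
    """Narrow [0, 1) symbol by symbol; any number in the final interval encodes the message."""
    low, high = 0.0, 1.0
    for symbol in message:
        span = high - low
        sym_low, sym_high = ranges[symbol]
        low, high = low + span * sym_low, low + span * sym_high
    return low, high

low, high = arithmetic_interval("abac")
print(f"[{low:.6f}, {high:.6f})")  # any value in this interval identifies "abac"
```
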
11

Akoguz, A., S. Bozkurt, A. A. Gozutok, G. Alp, E. G. Turan, M. Bogaz, and S. Kent. "COMPARISON OF OPEN SOURCE COMPRESSION ALGORITHMS ON VHR REMOTE SENSING IMAGES FOR EFFICIENT STORAGE HIERARCHY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 10, 2016): 3–9. http://dx.doi.org/10.5194/isprs-archives-xli-b4-3-2016.

Abstract:
The high resolution of modern satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image-enhancement steps that follow acquisition, file sizes increase even further, making the data harder to store and more time consuming to transmit from one source to another; hence, saving additional space by compressing the raw and variously processed data is a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open-source programs supporting the related compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe how much of the image data each algorithm can compress while ensuring lossless compression.
12

Abu-Taieh, Evon, and Issam AlHadid. "CRUSH: A New Lossless Compression Algorithm." Modern Applied Science 12, no. 11 (October 29, 2018): 387. http://dx.doi.org/10.5539/mas.v12n11p387.

Abstract:
Multimedia is a highly competitive world, and one property in which this is reflected is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and unnecessary data structures. The paper compares CRUSH with other compression algorithms such as Shannon-Fano coding, Huffman coding, Run Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet tree, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
13

Abu-Taieh, Evon, and Issam AlHadid. "CRUSH: A New Lossless Compression Algorithm." Modern Applied Science 12, no. 11 (October 29, 2018): 406. http://dx.doi.org/10.5539/mas.v12n11p406.

Abstract:
Multimedia is a highly competitive world, and one property in which this is reflected is the speed of download and upload of multimedia elements: text, sound, pictures, and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and unnecessary data structures. The paper compares CRUSH with other compression algorithms such as Shannon-Fano coding, Huffman coding, Run Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet tree, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
14

Al-Ashwal, A. Y., A. H. M. Al-Mawgani, and Waled Hussein Al-Arashi. "An Image Steganography Algorithm for Hiding Data Based on HDWT, LZW and OPAP." Journal of Science and Technology 20, no. 1 (September 5, 2015): 9–21. http://dx.doi.org/10.20428/jst.v20i1.840.

Abstract:
Image steganography is the art of information hiding, which embeds secret data into a cover image. However, high capacity of secret data and high quality of the stego image are key issues in image steganography. In this paper, an image steganography technique based on the Haar Discrete Wavelet Transform (HDWT), the Lempel Ziv Welch (LZW) algorithm and the Optimal Pixel Adjustment Process (OPAP) is proposed. The HDWT is used to increase the robustness of the stego image against attacks. To increase the hiding capacity, the LZW algorithm is applied to the secret message. The OPAP is then applied to reduce the embedding error between the cover image and the stego image. The experimental results are evaluated on four standard cover images and two types of secret messages. The results demonstrate high visual quality of the stego image with a large Hidden Capacity (HC) of secret data compared with recent techniques.
15

Ignatoski, Matea, Jonatan Lerga, Ljubiša Stanković, and Miloš Daković. "Comparison of Entropy and Dictionary Based Text Compression in English, German, French, Italian, Czech, Hungarian, Finnish, and Croatian." Mathematics 8, no. 7 (July 1, 2020): 1059. http://dx.doi.org/10.3390/math8071059.

Abstract:
The rapid growth in the amount of data in the digital world leads to the need for data compression, that is, reducing the number of bits needed to represent a text file, an image, audio, or video content. Compressing data saves storage capacity and speeds up data transmission. In this paper, we focus on text compression and provide a comparison of algorithms (in particular, entropy-based arithmetic and dictionary-based Lempel–Ziv–Welch (LZW) methods) for text compression in different languages (Croatian, Finnish, Hungarian, Czech, Italian, French, German, and English). The main goal is to answer the question: "How does the language of a text affect the compression ratio?" The results indicate that the compression ratio is affected by the size of the language alphabet and the size or type of the text. For example, The European Green Deal was compressed by 75.79%, 76.17%, 77.33%, 76.84%, 73.25%, 74.63%, 75.14%, and 74.51% using the LZW algorithm, and by 72.54%, 71.47%, 72.87%, 73.43%, 69.62%, 69.94%, 72.42% and 72% using the arithmetic algorithm for the English, German, French, Italian, Czech, Hungarian, Finnish, and Croatian versions, respectively.
16

Barik, Alhassan Abdul, Mohammed Ibrahim Daabo, and Stephen Akobre. "Improved Lempel-Ziv-Welch’s Error Detection and Correction Scheme using Redundant Residue Number System (RRNS)." Circulation in Computer Science 2, no. 6 (July 20, 2017): 25–30. http://dx.doi.org/10.22632/ccs-2017-252-33.

Abstract:
The greatest difficulty in compressing data is assuring the security, integrity, and accuracy of the data in storage on volatile media or in transmission over network communication channels. Various methods have been proposed for dealing with the accuracy and consistency of compressed and encrypted data using error detection and correction mechanisms. The Redundant Residue Number System (RRNS), a trait of the Residue Number System (RNS), is one of the available methods for detecting and correcting errors; it involves the addition of extra moduli called redundant moduli. In this paper, RNS is efficiently applied to the Lempel-Ziv-Welch (LZW) compression algorithm, resulting in a new LZW-RNS compression scheme that uses the traditional moduli set plus two redundant moduli, giving the moduli set {2^n − 1, 2^n, 2^n + 1, 2^(2n) − 3, 2^(2n) + 1} for the purposes of error detection and correction. This is done by constraining the data within the legitimate range of the dynamic range provided by the non-redundant moduli. Simulation with MATLAB shows that the proposed scheme is more efficient and fault tolerant than the traditional LZW compression method and other related state-of-the-art schemes.
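To make the residue idea concrete, here is a small illustrative Python sketch (the word size n = 4, the error-injection step, and the range check are assumptions for the example, not the paper's scheme): a value within the dynamic range of the three non-redundant moduli is encoded over the full five-modulus set, and a corrupted residue is flagged when Chinese Remainder reconstruction falls outside the legitimate range.

```python
from math import prod

def rns_encode(x: int, moduli: list[int]) -> list[int]:
    """Residue representation of x over the given moduli."""
    return [x % m for m in moduli]

def crt_decode(residues: list[int], moduli: list[int]) -> int:
    """Reconstruct x from its residues via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse
    return x % M

n = 4  # assumed word size for the example
moduli = [2**n - 1, 2**n, 2**n + 1, 2**(2*n) - 3, 2**(2*n) + 1]  # moduli set from the paper
legitimate_range = prod(moduli[:3])  # dynamic range of the non-redundant moduli

value = 1234
residues = rns_encode(value, moduli)
residues[1] ^= 0b101                  # inject an error into one residue channel
decoded = crt_decode(residues, moduli)
print("error detected:", decoded >= legitimate_range)  # True here: result falls outside the range
```
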
17

Di, Jinhong, Pengkun Yang, Chunyan Wang, and Lichao Yan. "Layered Lossless Compression Method of Massive Fault Recording Data." International Journal of Circuits, Systems and Signal Processing 16 (January 3, 2022): 17–25. http://dx.doi.org/10.46300/9106.2022.16.3.

Abstract:
In order to overcome the problems of large error and low precision in traditional power-fault record data compression, a new layered lossless compression method for massive fault record data is proposed in this paper. The method is based on the LZW (Lempel Ziv Welch) algorithm: the existing algorithm and its problems are analyzed, and the algorithm is improved. The index value of the dictionary is used to replace the input string sequence, and unknown strings are added to the dictionary dynamically. The dictionary is divided into several small dictionaries with different bit widths to enable parallel search, and the optimal hardware compression effect of the LZW algorithm is obtained from its compression and decompression behaviour. The improved LZW algorithm constructs the dictionary with a multi-tree structure and queries it globally using a multi-character parallel search. The dictionary size and update strategy of the LZW algorithm are analyzed, and optimization parameters are designed to construct and update the dictionary. Through lossless dictionary compression, the hierarchical lossless compression of large-scale fault record data is completed. The experimental results show that, compared with the traditional compression method, the proposed method effectively reduces the mean square error percentage and the compression error, ensuring the integrity of the fault record data and achieving the expected compression effect in a short time.
18

Anandita, Ida Bagus Gede, I. Gede Aris Gunadi, and Gede Indrawan. "Analisis Kinerja Dan Kualitas Hasil Kompresi Pada Citra Medis Sinar-X Menggunakan Algoritma Huffman, Lempel Ziv Welch Dan Run Length Encoding." SINTECH (Science and Information Technology) Journal 1, no. 1 (February 9, 2018): 7–15. http://dx.doi.org/10.31598/sintechjournal.v1i1.179.

Abstract:
Technological progress in the medical field means that medical images such as X-rays are stored in digital files. Medical image files are relatively large, so the images need to be compressed. Lossless compression is image compression in which the decompression result is identical to the original, i.e., no information is lost in the compression process. Algorithms for lossless compression include Run Length Encoding (RLE), Huffman, and Lempel Ziv Welch (LZW). This study compared the performance of the three algorithms in compressing medical images. The decompressed images were compared objectively in terms of compression ratio, compression time, MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio). MSE and PSNR were used for quantitative image quality measurement, while the subjective assessment was assisted by three experts who compared the original images with the decompressed images. Based on the objective assessment, the RLE algorithm showed the best performance, yielding a ratio, time, MSE and PSNR of 86.92%, 3.11 ms, 0 and 0 dB respectively. For Huffman, the results were 12.26%, 96.94 ms, 0 and 0 dB, and for LZW −63.79%, 160 ms, 0.3 and 58.955 dB. For the subjective assessment, the experts agreed that all images could be analyzed well.
19

M.K., Bouza. "Analysis and modification of graphic data compression algorithms." Artificial Intelligence 25, no. 4 (December 25, 2020): 32–40. http://dx.doi.org/10.15407/jai2020.04.032.

Abstract:
The article examines the JPEG and JPEG-2000 algorithms for compressing various graphic images. The main steps of both algorithms are given, and their advantages and disadvantages are noted. The main differences between JPEG and JPEG-2000 are analyzed. It is noted that the JPEG-2000 algorithm makes it possible to remove visually unpleasant effects, to highlight important areas of the image, and to improve the quality of their compression. The features of each step of the algorithms are considered and the difficulties of their implementation are compared. The effectiveness of each algorithm is demonstrated on a full-color image of the BSU emblem, and the resulting compression ratios for both algorithms are shown in the corresponding tables. Compression ratios are obtained for a wide range of quality values from 1 to 10. Various types of images were studied: black and white, business graphics, indexed and full color. A modified LZW (Lempel-Ziv-Welch) algorithm is presented, which is applicable to compressing a variety of information from text to images. The modification is based on limiting the graphic file to 256 colors, which makes it possible to index each color with one byte instead of three. The efficiency of this modification grows with increasing image size. The modified LZW algorithm can be adapted to any image from single-color to full-color. The prepared test images were indexed to the required number of colors using the FastStone Image Viewer program; for each image, seven copies were obtained, containing 4, 8, 16, 32, 64, 128 and 256 colors, respectively. Testing showed that the modified version of the LZW algorithm achieves on average twice the compression ratio; however, on the class of full-color images both algorithms showed the same results. The developed modification of the LZW algorithm can be successfully applied in the field of site design, especially in the case of so-called flat design. The comparative characteristics of the basic and modified methods are presented.
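The 256-color indexing idea can be sketched as follows (an illustrative Python example with a toy image, not the author's implementation): three-byte RGB pixels are replaced by one-byte indices into a palette.

```python
import numpy as np

def index_colors(rgb_image: np.ndarray):
    """Map each RGB pixel to a one-byte palette index (assumes <= 256 distinct colors)."""
    pixels = rgb_image.reshape(-1, 3)
    palette, indices = np.unique(pixels, axis=0, return_inverse=True)
    if len(palette) > 256:
        raise ValueError("more than 256 colors; quantize first")
    indexed = indices.astype(np.uint8).reshape(rgb_image.shape[:2])
    return palette.astype(np.uint8), indexed

img = np.random.randint(0, 4, (8, 8, 3), dtype=np.uint8) * 64  # toy image with few colors
palette, indexed = index_colors(img)
print(palette.shape, indexed.dtype)  # (k, 3) palette plus one byte per pixel
```
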
20

Liu, Yong, Bing Li, Yan Zhang, and Xia Zhao. "A Huffman-Based Joint Compression and Encryption Scheme for Secure Data Storage Using Physical Unclonable Functions." Electronics 10, no. 11 (May 25, 2021): 1267. http://dx.doi.org/10.3390/electronics10111267.

Abstract:
With the development of Internet of Things (IoT) and cloud-computing technologies, cloud servers need to store a huge volume of IoT data with high throughput and robust security. Joint Compression and Encryption (JCAE) schemes based on the Huffman algorithm have been regarded as a promising technology for enhancing data storage. Existing JCAE schemes still have the following limitations: (1) the keys in the JCAE can be cracked by physical and cloning attacks; (2) rebuilding the Huffman tree reduces operational efficiency; (3) the compression ratio should be further improved. In this paper, a Huffman-based JCAE scheme using Physical Unclonable Functions (PUFs) is proposed. It provides physically secure keys with PUFs, efficient Huffman tree mutation without rebuilding, and a practical compression ratio by combining the Lempel-Ziv-Welch (LZW) algorithm. The performance of the instanced PUFs and the derived keys was evaluated. Moreover, the scheme was demonstrated in a file protection system with an average throughput of 473 Mbps and an average compression ratio of 0.5586. Finally, the security analysis shows that the scheme resists physical and cloning attacks as well as several classic attacks, improving the security level of existing data protection methods.
21

Revathi, S., and D. Thiripurasundari. "An Approach to Efficient Dictionary Utilization and Improved Data Compression Technique for LZW Algorithm." International Journal of Engineering and Advanced Technology 10, no. 2 (December 30, 2020): 224–29. http://dx.doi.org/10.35940/ijeat.b2097.1210220.

Abstract:
This paper proposes an improved data compression technique based on the existing Lempel-Ziv-Welch (LZW) algorithm. LZW is a dictionary-update-based compression technique that stores elements of the data as codes and reuses them when those strings recur. In the conventional algorithm, when the dictionary gets full every element is removed in order to make room for new entries, so frequently used strings are not taken into account. This is not effective when the data to be compressed are large and contain many frequently occurring strings. This paper presents two new methods that improve the existing LZW compression algorithm. In these methods, when the dictionary gets full, the elements that have not been used are removed rather than the entire dictionary. This is achieved by adding a flag to every dictionary element; whenever an element is used, its flag is set high. Thus, when the dictionary gets full, the entries whose flags are set high are kept and the others are discarded. In the first method the unused entries are discarded all at once, whereas in the second method they are removed one at a time, giving the nascent elements of the dictionary more time. All three techniques give similar results when the data set is small, because the only difference lies in how they handle the dictionary when it is full; the improvements therefore yield better results only for relatively large data. When the three techniques were compared on a data set that yields the best-case scenario, the compression ratio of conventional LZW was smaller than that of improved LZW method 1, which in turn was smaller than that of improved LZW method 2.
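The selective-eviction idea can be sketched roughly as follows (a simplified Python illustration under assumed details such as the dictionary size and the renumbering strategy, not the authors' exact method): each dictionary entry carries a usage flag, and when the dictionary reaches capacity only unused phrases are discarded.

```python
def lzw_compress_flagged(data: bytes, max_entries: int = 512) -> list[int]:
    """Illustrative LZW variant: when the dictionary fills, keep only used phrases."""
    dictionary = {bytes([i]): i for i in range(256)}
    used = {code: False for code in range(256)}
    next_code, phrase, output = 256, b"", []

    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
            continue
        code = dictionary[phrase]
        output.append(code)
        used[code] = True                      # phrases that get emitted count as used
        if next_code < max_entries:
            dictionary[candidate] = next_code  # normal LZW growth
            used[next_code] = False
            next_code += 1
        else:
            # Dictionary full: keep single bytes and used phrases, drop the rest.
            # (A matching decoder must prune and renumber at the same points to stay in sync.)
            kept = [p for p, c in dictionary.items() if used[c] or len(p) == 1]
            dictionary = {p: i for i, p in enumerate(kept)}
            used = {i: False for i in range(len(kept))}
            next_code = len(kept)
        phrase = bytes([byte])

    if phrase:
        output.append(dictionary[phrase])
    return output

codes = lzw_compress_flagged(b"TOBEORNOTTOBEORTOBEORNOT" * 40, max_entries=300)
print(len(codes), "codes emitted")
```
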
22

Gabriel, Arome Junior, and Emmanuel Akindoyin Awosola. "Secure Electronic Commerce Transactions Using Pythagorean Triple-based Cryptography and Audio Steganography." Applied and Computational Engineering 2, no. 1 (March 22, 2023): 578–87. http://dx.doi.org/10.54254/2755-2721/2/20220613.

Abstract:
The rapidly increasing use of Information Technology (IT) in electronic commerce and other aspects of human life has led to rising concerns about the privacy and safety/security of those involved. Given the alarmingly high frequency of data and user-privacy breaches today, conducting online transactions can pose a major threat to buyers' (and even business owners') privacy, and existing solutions to these security issues are still vulnerable to classic phishing scams. As a result, this work presents a robust and efficient audio steganography system for secure electronic commerce transactions. The developed system first encrypts sensitive financial data using the Pythagorean Triple-based Cryptographic (PTC) algorithm, then compresses the resulting ciphertext using the Lempel Ziv Welch (LZW) and Huffman compression algorithms, and finally embeds the compressed file, using a one-dimensional Discrete Cosine Transform (DCT), in a suitable audio cover file to produce an output (stego file) that is indistinguishable from the cover file. This output can then be exchanged between communicating entities across open networks. The experimental results show, as evidenced by the Signal-to-Noise Ratio (SNR), that the new system introduces little or no alteration in its outputs.
23

Gómez García, Carlos Andres, and Jaime Velasco Medina. "Compression and Encryption of Vital Signals Using an SoC-FPGA." DYNA 88, no. 219 (December 2, 2021): 147–54. http://dx.doi.org/10.15446/dyna.v88n219.92532.

Abstract:
This article presents the implementation of a remote monitoring system for biomedical signals with cybersecurity support and compression of vital-sign signals and patient data. The system uses a low-cost microsystem that encrypts and compresses the information using the Lempel–Ziv–Welch (LZW) lossless compression algorithm and the Advanced Encryption Standard (AES). The WolfSSL library is used to implement the Transport Layer Security (TLS) protocol, whose encryption function is accelerated by an AES processor designed on a System on Chip - Field Programmable Gate Array (SoC-FPGA) device. A multiparameter board and an SoC-FPGA development board make up the vital-signs measurement system, which was calibrated and verified with a commercial patient simulator. Data transmission tests were carried out from the measurement system to the monitoring application developed in LabVIEW and running on a personal computer (PC), where the vital signs and patient data are decrypted and decompressed, and a significant improvement in the performance of the TLS connection was verified. From the results obtained, it can be concluded that the designed microsystem can compress, encrypt, and transmit biomedical data in real time and without loss of information, making it well suited to e-health platforms and devices that use unsecured communication networks with limited bandwidth.
24

Zhu, Zheng-An, Yun-Chung Lu, Chih-Hsiang You, and Chen-Kuo Chiang. "Deep Learning for Sensor-Based Rehabilitation Exercise Recognition and Evaluation." Sensors 19, no. 4 (February 20, 2019): 887. http://dx.doi.org/10.3390/s19040887.

Abstract:
In this paper, a multipath convolutional neural network (MP-CNN) is proposed for rehabilitation exercise recognition using sensor data. It consists of two novel components: a dynamic convolutional neural network (D-CNN) and a state transition probability CNN (S-CNN). In the D-CNN, Gaussian mixture models (GMMs) are exploited to capture the distribution of sensor data for the body movements of the physical rehabilitation exercises. Then, the input signals and the GMMs are screened into different segments. These form multiple paths in the CNN. The S-CNN uses a modified Lempel–Ziv–Welch (LZW) algorithm to extract the transition probabilities of hidden states as discriminate features of different movements. Then, the D-CNN and the S-CNN are combined to build the MP-CNN. To evaluate the rehabilitation exercise, a special evaluation matrix is proposed along with the deep learning classifier to learn the general feature representation for each class of rehabilitation exercise at different levels. Then, for any rehabilitation exercise, it can be classified by the deep learning model and compared to the learned best features. The distance to the best feature is used as the score for the evaluation. We demonstrate our method with our collected dataset and several activity recognition datasets. The classification results are superior when compared to those obtained using other deep learning models, and the evaluation scores are effective for practical applications.
25

Romankevych, Vitalii O., Ivan V. Mozghovyi, Pavlo A. Serhiienko, and Lefteris Zacharioudakis. "Decompressor for hardware applications." Applied Aspects of Information Technology 6, no. 1 (April 10, 2023): 74–83. http://dx.doi.org/10.15276/aait.06.2023.6.

Abstract:
The use of lossless compression in application-specific computers provides advantages such as a minimized amount of memory, increased interface bandwidth, reduced energy consumption, and improved self-testing. The article discusses known lossless compression algorithms with the aim of choosing the most suitable one for implementation in a hardware-software decompressor. Among them, the Lempel-Ziv-Welch (LZW) algorithm makes it possible to implement the associative memory of the decompressor dictionary in the simplest way, by sequentially reading the symbols of the decompressed word. Analysis of existing hardware decompressor implementations showed that their main goal was to increase bandwidth at the expense of higher hardware cost and limited functionality. It is proposed to implement the LZW decompressor as a hardware module based on a microprocessor core with a specialized instruction set. For this, a processor core with a stack architecture, developed by the authors for file-grammar analysis tasks, was selected. An additional memory block for storing the dictionary and an input buffer that converts the byte stream of the packed file into a sequence of unpacked codes are added to it. The processor core instruction set is adjusted both to speed up decompression and to reduce hardware costs. The decompressor is described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL) and implemented in a field-programmable gate array (FPGA). At a clock frequency of up to two hundred megahertz, the average throughput of the decompressor is more than ten megabytes per second. As a result of the hardware-software implementation, an LZW decompressor is obtained that has approximately the same hardware cost as a purely hardware decompressor and a lower bandwidth, in exchange for the flexibility and multifunctionality provided by the processor core software. In particular, a decompressor for Graphics Interchange Format files is implemented in the FPGA on the basis of this device, for dynamic visualization of patterns on an embedded system display.
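To complement the encoder sketched earlier in this list, here is a minimal software LZW decoder in Python (a textbook-style illustration, not the hardware-software design described above); it rebuilds the dictionary from the code stream, including the classic case where a code refers to the phrase currently being defined.

```python
def lzw_decompress(codes: list[int]) -> bytes:
    """Textbook LZW decoding: rebuild the dictionary while reading the code stream."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    previous = dictionary[codes[0]]
    output = [previous]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        elif code == next_code:            # special case: phrase defined by itself
            entry = previous + previous[:1]
        else:
            raise ValueError("invalid LZW code")
        output.append(entry)
        dictionary[next_code] = previous + entry[:1]   # register the new phrase
        next_code += 1
        previous = entry
    return b"".join(output)

# Round-trip with the encoder sketched earlier (hypothetical pairing):
# lzw_decompress(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")) == b"TOBEORNOTTOBEORTOBEORNOT"
```
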
26

Ruffini, Giulio, Giada Damiani, Diego Lozano-Soldevilla, Nikolas Deco, Fernando E. Rosas, Narsis A. Kiani, Adrián Ponce-Alvarez, Morten L. Kringelbach, Robin Carhart-Harris, and Gustavo Deco. "LSD-induced increase of Ising temperature and algorithmic complexity of brain dynamics." PLOS Computational Biology 19, no. 2 (February 3, 2023): e1010811. http://dx.doi.org/10.1371/journal.pcbi.1010811.

Abstract:
A topic of growing interest in computational neuroscience is the discovery of fundamental principles underlying global dynamics and the self-organization of the brain. In particular, the notion that the brain operates near criticality has gained considerable support, and recent work has shown that the dynamics of different brain states may be modeled by pairwise maximum entropy Ising models at various distances from a phase transition, i.e., from criticality. Here we aim to characterize two brain states (psychedelics-induced and placebo) as captured by functional magnetic resonance imaging (fMRI), with features derived from the Ising spin model formalism (system temperature, critical point, susceptibility) and from algorithmic complexity. We hypothesized, along the lines of the entropic brain hypothesis, that psychedelics drive brain dynamics into a more disordered state at a higher Ising temperature and increased complexity. We analyze resting state blood-oxygen-level-dependent (BOLD) fMRI data collected in an earlier study from fifteen subjects in a control condition (placebo) and during ingestion of lysergic acid diethylamide (LSD). Working with the automated anatomical labeling (AAL) brain parcellation, we first create “archetype” Ising models representative of the entire dataset (global) and of the data in each condition. Remarkably, we find that such archetypes exhibit a strong correlation with an average structural connectome template obtained from dMRI (r = 0.6). We compare the archetypes from the two conditions and find that the Ising connectivity in the LSD condition is lower than the placebo one, especially in homotopic links (interhemispheric connectivity), reflecting a significant decrease of homotopic functional connectivity in the LSD condition. The global archetype is then personalized for each individual and condition by adjusting the system temperature. The resulting temperatures are all near but above the critical point of the model in the paramagnetic (disordered) phase. The individualized Ising temperatures are higher in the LSD condition than the placebo condition (p = 9 × 10−5). Next, we estimate the Lempel-Ziv-Welch (LZW) complexity of the binarized BOLD data and the synthetic data generated with the individualized model using the Metropolis algorithm for each participant and condition. The LZW complexity computed from experimental data reveals a weak statistical relationship with condition (p = 0.04 one-tailed Wilcoxon test) and none with Ising temperature (r(13) = 0.13, p = 0.65), presumably because of the limited length of the BOLD time series. Similarly, we explore complexity using the block decomposition method (BDM), a more advanced method for estimating algorithmic complexity. The BDM complexity of the experimental data displays a significant correlation with Ising temperature (r(13) = 0.56, p = 0.03) and a weak but significant correlation with condition (p = 0.04, one-tailed Wilcoxon test). This study suggests that the effects of LSD increase the complexity of brain dynamics by loosening interhemispheric connectivity—especially homotopic links. In agreement with earlier work using the Ising formalism with BOLD data, we find the brain state in the placebo condition is already above the critical point, with LSD resulting in a shift further away from criticality into a more disordered state.
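For readers curious how an LZW-style complexity of a binarized signal can be estimated, here is a rough illustrative sketch (a simplification, not the authors' pipeline): the number of new dictionary phrases needed to parse the binary sequence serves as the complexity score.

```python
def lzw_complexity(bits: str) -> int:
    """Count the dictionary phrases an LZW-style parser creates for a binary string."""
    dictionary = {"0", "1"}
    phrase, phrases_added = "", 0
    for bit in bits:
        candidate = phrase + bit
        if candidate in dictionary:
            phrase = candidate
        else:
            dictionary.add(candidate)   # each new phrase increases the complexity count
            phrases_added += 1
            phrase = bit
    return phrases_added

print(lzw_complexity("0101010101010101"))  # regular sequence: low count
print(lzw_complexity("0110100110010110"))  # less regular sequence: higher count
```
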
27

Arroyuelo, Diego, Rodrigo Cánovas, Johannes Fischer, Dominik Köppl, Marvin Löbel, Gonzalo Navarro, and Rajeev Raman. "Engineering Practical Lempel-Ziv Tries." ACM Journal of Experimental Algorithmics 26 (December 31, 2021): 1–47. http://dx.doi.org/10.1145/3481638.

Abstract:
The Lempel-Ziv 78 ( LZ78 ) and Lempel-Ziv-Welch ( LZW ) text factorizations are popular, not only for bare compression but also for building compressed data structures on top of them. Their regular factor structure makes them computable within space bounded by the compressed output size. In this article, we carry out the first thorough study of low-memory LZ78 and LZW text factorization algorithms, introducing more efficient alternatives to the classical methods, as well as new techniques that can run within less memory space than the necessary to hold the compressed file. Our results build on hash-based representations of tries that may have independent interest.
28

Freudenberger, Jürgen, Mohammed Rajab, Daniel Rohweder, and Malek Safieh. "A Codec Architecture for the Compression of Short Data Blocks." Journal of Circuits, Systems and Computers 27, no. 02 (September 11, 2017): 1850019. http://dx.doi.org/10.1142/s0218126618500196.

Abstract:
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
29

Tütüncü, Kemal, and Özcan Çataltaş. "Compensation of degradation, security, and capacity of LSB substitution methods by a new proposed hybrid n-LSB approach." Computer Science and Information Systems, no. 00 (2021): 48. http://dx.doi.org/10.2298/csis210227048t.

Abstract:
This study proposes a new hybrid n-LSB (Least Significant Bit) substitution-based image steganography method in the spatial plane. The n-LSB substitution method previously proposed by the authors of this paper is combined with the Rivest-Shamir-Adleman (RSA), RC5, and Data Encryption Standard (DES) encryption algorithms to improve security, one of the requirements of steganography, and with the Lempel-Ziv-Welch (LZW), Arithmetic and Deflate lossless compression algorithms to increase the secret-message capacity. In addition, embedding is done randomly using a logistic-map-based chaos generator to increase security further. The classical n-LSB substitution method and the proposed hybrid approaches based on the previously proposed n-LSB were implemented using different secret messages and cover images. The results show that the proposed hybrid n-LSB approach improves all three criteria of steganography. The hybrid approach consisting of the previously proposed n-LSB, RSA, Deflate, and the logistic map gave the best results in terms of capacity, security, and imperceptibility.
30

"Intensification of Lempel-ZIV-Welch Algorithm." International Journal of Innovative Technology and Exploring Engineering 8, no. 9S (August 23, 2019): 587–91. http://dx.doi.org/10.35940/ijitee.i1092.0789s19.

Abstract:
There is a necessity to reduce the consumption of exclusive resources, which is achieved using data compression. Data compression is a well-known technique that can reduce file size, and a plethora of data compression algorithms are available, providing various compression ratios. LZW is one of the most powerful and widely used algorithms. This paper proposes and applies some enhancements to LZW, arriving at an efficient lossless text compression scheme that can compress a given file at a better compression ratio. The paper proposes three approaches which practically enhance the original algorithm and aim for a better compression ratio. Approach 1 exploits the notion of reusing an existing string code, as an odd code, for a newly encountered string that is the reverse of an existing one. Approach 2 uses a choice of code length for the current compression, avoiding the problem of dictionary overflow. Approach 3 appends a selective set of frequently encountered string patterns. With these features, the intensified LZW method provides a better compression ratio.
31

Ibrahim, MB, and KA Gbolagade. "Enhancing Computational Time of Lempel-Ziv-Welch-Based Text Compression with Chinese Remainder Theorem." Journal of Computer Science and Its Application 27, no. 1 (August 7, 2020). http://dx.doi.org/10.4314/jcsia.v27i1.9.

Abstract:
The science and art of data compression is presenting information in a compact form. This compact representation is generated by recognizing and exploiting structures that exist in the data. The Lempel-Ziv-Welch (LZW) algorithm is known to be one of the best compressors of text, achieving a high degree of compression for text files with many redundancies: the greater the redundancy, the greater the compression achieved. In this paper, the LZW algorithm is further enhanced, through the introduction of the Chinese Remainder Theorem (CRT), to achieve a higher degree of compression without compromising its performance. Compression time and compression ratio were used as performance metrics. Simulations were carried out in MATLAB on five text files of varying sizes to determine the efficiency of the proposed CRT-LZW technique. This new technique opens a new line of development for compressing data faster than traditional LZW. The results show that CRT-LZW performs better than LZW in terms of computational time (0.12 s versus 15.15 s), while the compression ratio remains the same at 2.56%. The proposed compression time also performed better than investigated papers implementing LZW-RNS (0.12 s versus 2.86 s, and 0.12 s versus 0.14 s). Keywords: data compression, Lempel-Ziv-Welch (LZW) algorithm, enhancement, Chinese Remainder Theorem (CRT), text files.
32

"Genomic Sequence Data Compression using Lempel-Ziv-Welch Algorithm with Indexed Multiple Dictionary." International Journal of Engineering and Advanced Technology 9, no. 2 (December 30, 2019): 541–47. http://dx.doi.org/10.35940/ijeat.b3278.129219.

Abstract:
With the advancement of technology and the development of High Throughput Systems (HTS), the amount of genomic data generated per day per laboratory across the globe is surpassing Moore's law. The huge amount of data generated concerns biologists with respect to its storage as well as its transmission across different locations for further analysis. Compression of genomic data is a wise option to overcome the problems arising from this data deluge. This paper discusses various existing algorithms for the compression of genomic data, as well as a few general-purpose algorithms, and proposes an LZW-based compression algorithm that uses indexed multiple dictionaries. The proposed method exhibits an average compression ratio of 0.41 bits per base and an average compression time of 6.45 s for DNA sequences of average size 105.9 KB.
33

Rosalina, Rosalina, and Wahyu Hidayat. "SMS Compression System using Arithmetic Coding Algorithm." IT for Society 3, no. 02 (February 18, 2018). http://dx.doi.org/10.33021/itfs.v3i02.586.

Abstract:
The Short Message Service (SMS) limits the length of a text message to 160 characters, so sending more than 160 characters counts as sending more than one SMS and increases the cost. On the other hand, the arithmetic coding algorithm provides an effective mechanism for text compression: it achieves strong compression results and in many cases is considered better than other algorithms, such as Huffman and LZW (Lempel-Ziv-Welch). This research implements the arithmetic coding algorithm in an application that compresses the SMS text message before it is sent from the sender to the receiver. The application, called CheaperZipper (CZ), handles sending and receiving SMS on the handset, performing the compression and decompression process.
34

Mazin, Zahraa, and Saif Al Alak. "A Developed Compression Scheme to Optimize Data Transmission in Wireless Sensor Networks." Iraqi Journal of Science, March 30, 2023, 1463–76. http://dx.doi.org/10.24996/ijs.2023.64.3.35.

Abstract:
Improving performance is an important issue in Wireless Sensor Networks (WSN), which have many limitations, including network performance. The research question is how to reduce the amount of data transmitted in order to improve network performance. The work uses one of the dictionary compression methods, Lempel-Ziv-Welch (LZW). One problem with the dictionary method is that the token size is fixed, so LZW is not very useful with little data, because it wastes many bytes when storing small tokens. From the results obtained, the best compression ratios were achieved by the proposed algorithm. The proposed work uses a dynamic token size, where the tokens are classified according to their size (one byte, two bytes, or three bytes). The main idea of the proposed work is to increase the frequency of the data in order to increase the compression ratio. To increase this frequency, the work stores the incremental change between readings instead of the whole reading; because climate readings change very slowly, the amounts of change recur frequently.
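The incremental-reading idea can be sketched as follows (an illustrative Python example with hypothetical temperature readings, not the paper's exact token scheme): storing differences between consecutive sensor readings produces many repeated small values, which a dictionary coder such as LZW can then exploit.

```python
def delta_encode(readings: list[int]) -> list[int]:
    """Keep the first reading, then only the change from the previous reading."""
    deltas = [readings[0]]
    for previous, current in zip(readings, readings[1:]):
        deltas.append(current - previous)
    return deltas

def delta_decode(deltas: list[int]) -> list[int]:
    """Invert delta_encode by accumulating the changes."""
    readings = [deltas[0]]
    for change in deltas[1:]:
        readings.append(readings[-1] + change)
    return readings

temperatures = [251, 251, 252, 252, 252, 253, 253, 252]  # hypothetical readings (0.1 °C units)
deltas = delta_encode(temperatures)
print(deltas)                                 # [251, 0, 1, 0, 0, 1, 0, -1] — highly repetitive
assert delta_decode(deltas) == temperatures
```
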
35

"Improved LZW Compression Technique using Difference Method." International Journal of Innovative Technology and Exploring Engineering 9, no. 5 (March 10, 2020): 87–92. http://dx.doi.org/10.35940/ijitee.e2216.039520.

Abstract:
This work attempts to give a best approach for selecting one of the popular image compression algorithms by finding the best-performing approach among several compression algorithms. The existing lossless compression technique LZW (Lempel-Ziv-Welch) is redesigned to achieve a better compression ratio. LZW works on the basis of repetition in the data; when all values are distinct, or repetition is absent, LZW cannot work properly. To avoid this problem, a difference method, called the difference matrix method, is used: it calculates the difference between two consecutive values and stores it in a resultant matrix, which then contains repetitive data and is therefore more effective for the LZW technique. Another problem of LZW is dictionary overflow: because LZW works on ASCII characters, the dictionary is initially limited to 256 entries. In this work a dynamic dictionary method is used instead of the static ASCII-based one, so the initial dictionary values can be anything in the range -256 to 255. ASCII values are not used here because the proposed method is applicable to grayscale images, where the pixel values lie in the range 0 to 255. With these two changes, the proposed improved LZW method becomes more powerful and can significantly compress non-repetitive data. Applied to many standard grey images from the literature, the proposed method achieves 7% to 18% more compression than normal LZW while keeping the image quality the same as the existing method.
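The difference preprocessing can be illustrated with a short Python sketch (a generic example assuming 8-bit grayscale input, not the paper's implementation): each row keeps its first pixel and otherwise stores differences between neighbouring pixels, which typically makes the data far more repetitive before a dictionary coder is applied.

```python
import numpy as np

def difference_matrix(image: np.ndarray) -> np.ndarray:
    """First column unchanged, remaining columns hold differences between neighbouring pixels."""
    diff = image.astype(np.int16)                      # widen so negative differences fit
    diff[:, 1:] = image[:, 1:].astype(np.int16) - image[:, :-1].astype(np.int16)
    return diff

def restore_image(diff: np.ndarray) -> np.ndarray:
    """Invert difference_matrix by cumulative summation along each row."""
    return np.cumsum(diff, axis=1).astype(np.uint8)

img = np.tile(np.arange(0, 80, 10, dtype=np.uint8), (4, 1))  # toy gradient image
d = difference_matrix(img)
print(np.unique(d[:, 1:]))        # mostly a single value: highly repetitive
assert np.array_equal(restore_image(d), img)
```
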
36

Manga, I., E. J. Garba, and A. S. Ahmadu. "Enhanced Image Compression and Processing Scheme." Current Journal of Applied Science and Technology, December 14, 2021, 1–11. http://dx.doi.org/10.9734/cjast/2021/v40i3831586.

Abstract:
Image compression refers to the process of encoding an image using a smaller number of bits. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. Lossy compression schemes lead to high compression ratios while the image loses quality; however, there are many cases, such as medical, artistic and scientific images, where loss of image quality or information due to compression must be avoided. Efficient lossless compression therefore becomes paramount, even though lossy compressed images are usually satisfactory in diverse cases. This paper, titled Enhanced Lossless Image Compression Scheme, aims to provide an enhanced lossless image compression scheme based on Bose-Chaudhuri-Hocquenghem and Lempel-Ziv-Welch (BCH-LZW) coding, using a Gaussian filter for image enhancement and noise reduction. An efficient and effective lossless image compression technique based on LZW-BCH coding to reduce redundancies in the image is presented, and image enhancement using the Gaussian filter algorithm is demonstrated. A secondary method of data collection was used to collect the data, and standard research images were used to validate the new scheme. To achieve this, an object-oriented approach using Java NetBeans was used to develop the compression scheme. The findings reveal that the average compression ratio of the enhanced lossless image compression scheme is 1.6489 and the average bits per pixel is 5.416667. Gaussian-filter image enhancement was used for noise reduction, and the image was enhanced to eight times the original.
37

Ibrahim, M. B., and K. A. Gbolagade. "A Chinese Remainder Theorem Based Enhancements of Lempel-ziv-welch and Huffman Coding Image Compression." Asian Journal of Research in Computer Science, July 6, 2019, 1–9. http://dx.doi.org/10.9734/ajrcos/2019/v3i330096.

Abstract:
Data size minimization is the focus of data compression procedures, which alter the representation of information and reduce its redundancy into a more effective form. In general, a lossless approach is favoured by a number of compression methods in order to maintain the content integrity of the file afterwards. The benefits of compression include saving storage space, speeding up data transmission, and preserving data quality. This paper examines the effectiveness of a Chinese Remainder Theorem (CRT) enhancement in the implementation of the Lempel-Ziv-Welch (LZW) and Huffman coding algorithms for compressing large images. Ten images from the Yale database were used for testing. The outcomes reveal that CRT-LZW compression saves more space and compresses (removes redundancy from) the original images more quickly than CRT-Huffman coding, at 29.78% versus 14.00% respectively. In terms of compression time, the CRT-LZW approach outperformed the CRT-Huffman approach at 9.95 s versus 19.15 s. For compression ratio, CRT-LZW also outperformed CRT-Huffman coding at 0.39 dB versus 4.38 dB, which is connected to the low quality and imperceptibility of the former. Similarly, CRT-Huffman coding (28.13 dB) offered a better Peak Signal-to-Noise Ratio (PSNR) for the reconstructed images than CRT-LZW (3.54 dB) and the 25.59 dB obtained in another investigated paper.
38

Salman, Nassir H., and Enas Kh Hassan. "Run Length Encoding Based Lossless MRI Image Compression Using LZW and Adaptive Variable Length Coding." Journal of Southwest Jiaotong University 54, no. 4 (2019). http://dx.doi.org/10.35741/issn.0258-2724.54.4.23.

Abstract:
Medical image compression is considered one of the most important research fields in biomedical applications. The majority of medical images must be compressed without loss, because the information in each pixel is of great value. With the widespread use of medical imaging applications in the health-care context and the increased significance of telemedicine technologies, it has become crucial to minimize both the storage and the bandwidth requirements for archiving and transmitting medical imaging data by employing lossless image compression algorithms, while preserving high resolution and image quality in the processed data. The proposed system introduces a lossless image compression technique based on Run Length Encoding (RLE) that encodes the original magnetic resonance imaging (MRI) image into actual values and their numbers of occurrence. The actual image data values are separated from their runs and stored in a vector array. Lempel–Ziv–Welch (LZW) coding then provides further compression, applied to the values array only. Finally, Variable Length Coding (VLC) is applied to code the values and runs arrays adaptively, with the precise number of bits, into a binary file. These bit streams are decoded using inverse LZW on the values array and inverse RLE to reconstruct the input image. The obtained compression gain is enhanced by 25% after applying LZW to the values array.
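The values-and-runs separation described above can be sketched as follows (an illustrative Python example over a hypothetical pixel row, not the paper's codec): RLE splits the data into a values array and a runs array, and the values array is the part that would then be passed to LZW.

```python
def rle_split(pixels: list[int]) -> tuple[list[int], list[int]]:
    """Run-length encode a sequence into separate values and run-length arrays."""
    values, runs = [], []
    for pixel in pixels:
        if values and values[-1] == pixel:
            runs[-1] += 1                 # extend the current run
        else:
            values.append(pixel)          # start a new run
            runs.append(1)
    return values, runs

def rle_merge(values: list[int], runs: list[int]) -> list[int]:
    """Invert rle_split."""
    return [v for v, r in zip(values, runs) for _ in range(r)]

row = [0, 0, 0, 7, 7, 255, 255, 255, 255, 7]   # hypothetical MRI pixel row
values, runs = rle_split(row)
print(values, runs)                             # [0, 7, 255, 7] [3, 2, 4, 1]
assert rle_merge(values, runs) == row
```
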