
Dissertations / Theses on the topic 'Huffman coding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 32 dissertations / theses for your research on the topic 'Huffman coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Kilic, Suha. "Modification of Huffman Coding." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21449.

Full text
2

Zou, Xin. "Compression and Decompression of Color MRI Image by Huffman Coding." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17029.

Full text
Abstract:
An MRI (Magnetic Resonance Imaging) image is a universal body checkup method in modern medicine that helps doctors analyze the condition of patients as soon as possible. As medical images, MRI images have high quality and a large amount of data, which requires more transmission time and larger storage capacity. To reduce transmission time and storage capacity, compression and decompression technology is applied. Most MRI images today are in colour, yet most theses still study grayscale MRI images; compression of colour MRI images is a new research area. In this thesis, some basic theories of compression technology and medical imaging were first introduced, then the basic structure and kernel algorithm of Huffman coding were explained in detail. Finally, Huffman coding was implemented in MATLAB to compress and decompress colour MRI images. The experimental results show that Huffman coding for colour MRI image compression can achieve a high compression ratio and coding efficiency.
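To illustrate the core of the technique this abstract describes, here is a minimal Python sketch of Huffman coding over 8-bit pixel values; it shows the general algorithm, not the thesis's MATLAB implementation, and the sample pixel values are made up.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix-free code table {symbol: bitstring} for `data`."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {s: "0" for s in freq}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

pixels = bytes([12, 12, 12, 40, 40, 255])    # toy "image" data
table = huffman_code(pixels)
bitstream = "".join(table[p] for p in pixels)
print(table, bitstream)
```

Frequent intensities receive short codewords, which is where the compression ratio the abstract reports comes from.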
3

Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.

Full text
Abstract:
Pattern matching over a large document collection is a very recurrent problem nowadays, as the growing use of search engines reveals. In order to accomplish the search in a period of time independent of the collection size, it is necessary to index the collection only once. The index size is typically linear in the size of the document collection. Data compression is another powerful resource for managing the ever-growing size of the document collection. The objective of this work is to ally indexed search with data compression, verifying alternatives to the current solutions and seeking improvements in search time and memory usage. The analysis of the index structures and compression algorithms indicates that combining block inverted files with Huffman word-based compression is an excellent option for systems with memory constraints, because it provides random access and compressed search. New prefix-free codes are also proposed in order to enhance the compression and to generate self-synchronized codes, that is, codes with truly viable random access. The advantage of these new codes is that they eliminate the need to generate the Huffman code tree through the proposed mappings, which translates into memory savings, more compact encoding, and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
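The thesis proposes its own tree-free prefix codes, which the abstract does not specify. As one well-known member of this family, here is a sketch of the End-Tagged Dense Code, which assigns self-synchronizing byte codewords from a word's frequency rank alone, with no Huffman tree; the vocabulary below is hypothetical.

```python
def etdc_encode(rank):
    """Map a word's frequency rank (0 = most frequent) to its codeword.
    The high bit tags the final byte, so a decoder can resynchronize
    at any codeword boundary."""
    digits = []
    while True:
        digits.append(rank % 128)
        rank = rank // 128 - 1
        if rank < 0:
            break
    digits.reverse()
    return bytes(digits[:-1]) + bytes([digits[-1] | 0x80])

vocab = ["the", "of", "huffman"]             # ranked by descending frequency
print({w: etdc_encode(i).hex() for i, w in enumerate(vocab)})
# -> {'the': '80', 'of': '81', 'huffman': '82'}
```

Because codewords depend only on rank, the encoder stores just the ranked vocabulary, which mirrors the memory savings the abstract claims for tree-free mappings.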
4

Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.

Full text
5

Lúdik, Michal. "Porovnání hlasových a audio kodeků [Comparison of voice and audio codecs]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.

Full text
Abstract:
This thesis deals with the description of human hearing, audio and speech codecs, objective quality measures, and a practical comparison of codecs. The chapter about audio codecs describes the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis. The chapter about speech codecs describes linear predictive coding and the G.729 and OPUS codecs. The evaluation of quality covers the segmental signal-to-noise ratio and perceptual quality measures, namely WSS and PESQ. The last chapter describes the practical part of this thesis, that is, a comparison of the memory and time consumption of the audio codecs and a perceptual evaluation of the speech codecs' quality.
6

Friedrich, Tomáš. "Komprese DNA sekvencí [Compression of DNA sequences]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237222.

Full text
Abstract:
The increasing volume of biological data requires new ways to store these data in genetic banks. The goal of this work is the design and implementation of a novel algorithm for the compression of DNA sequences. The algorithm is based on aligning DNA sequences against a reference sequence and storing only the differences between each sequence and the reference model. The work contains the basic prerequisites from molecular biology needed to understand the details of the algorithm. Alignment algorithms and common compression schemes suitable for storing the differences against the reference sequence are then described. The work continues with a description of the implementation, followed by a derivation of the time and space complexity and a comparison with common compression algorithms. Further continuation of this work is discussed in the conclusion.
7

Románek, Karel. "Nezávislý datalogger s USB připojením [Stand-alone datalogger with USB connection]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219113.

Full text
Abstract:
This thesis presents the concept of an autonomous USB datalogger for temperature, relative humidity, and pressure. It explains the datalogger's function, the hardware design with respect to device power consumption, and the design of the chassis. It further describes the communication protocol for controlling the device and reading out data from a PC, as well as the firmware drivers for some of the components used and the modules for USB communication, the RTC, and data compression. Lastly, it describes the software used for datalogger configuration and data readout.
8

Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení [Compression of ECG signals acquired with a mobile device]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.

Full text
Abstract:
Signal compression is a necessary part of ECG scanning because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission, it is necessary to minimize the amount of data as much as possible, using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master's thesis also includes a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and realized.
9

Krejčí, Michal. "Komprese dat [Data compression]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217934.

Full text
Abstract:
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part of the thesis is a theoretical treatment that informs the reader about basic terminology, the reasons for data compression, the use of data compression in standard practice, and the classification of compression algorithms. The practical part deals with the realization of the compression algorithms in Matlab and LabWindows/CVI.
10

Ondra, Josef. "Komprese signálů EKG s využitím vlnkové transformace [Compression of ECG signals using the wavelet transform]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217209.

Full text
Abstract:
Signal compression is a daily-used tool for reducing memory requirements and speeding data communication. Methods based on the wavelet transform seem very effective nowadays. Signal decomposition with a suitable filter bank followed by coefficient quantization represents one available technique. After packing the quantized coefficients into one sequence, run-length coding together with Huffman coding is applied. This thesis focuses on compression effectiveness for different wavelet transform and quantization settings.
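A rough Python sketch of the decompose / quantize / run-length stage the abstract outlines, assuming the PyWavelets package (pywt) in place of the thesis's filter banks; the wavelet, level, and quantizer step are illustrative choices, and the Huffman stage is left as a comment.

```python
import numpy as np
import pywt

def compress_line(signal, step=0.05):
    """Wavelet-decompose, uniformly quantize, then run-length code zeros."""
    coeffs = pywt.wavedec(signal, "db4", level=4)
    flat = np.concatenate(coeffs)
    q = np.round(flat / step).astype(int)         # uniform quantizer
    pairs, run = [], 0
    for v in q:
        if v == 0:
            run += 1                              # count a zero run
        else:
            pairs.append((int(v), run))           # value preceded by `run` zeros
            run = 0
    pairs.append((0, run))                        # flush the trailing run
    return pairs                                  # Huffman-code these pairs next

signal = np.sin(np.linspace(0, 8 * np.pi, 256))
print(compress_line(signal)[:5])
```

The quantizer step trades reconstruction error for longer zero runs, which is exactly the effectiveness trade-off the thesis studies.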
11

Grigoli, Francesco. "Studio dei codici, trasmissione e correzione efficiente di un messaggio [A study of codes and the efficient transmission and correction of a message]." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20965/.

Full text
Abstract:
This work sets out to describe how encoding, decoding, and error correction take place in a data transmission, exploiting Shannon entropy, Huffman coding, and Hamming codes.
12

Štys, Jiří. "Implementace statistických kompresních metod [Implementation of statistical compression methods]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413295.

Full text
Abstract:
This thesis describes the Burrows-Wheeler compression algorithm. It focuses on each part of the Burrows-Wheeler pipeline, above all on the global structure transformations and the entropy coders. Methods such as move-to-front, inversion frequencies, and interval coding are described. Among the entropy coders described are Huffman, arithmetic, and Rice-Golomb coders. In conclusion, the described global structure transformations and entropy coders are tested, and the best combinations are compared with the most common compression algorithms.
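A toy sketch of the first two stages the abstract names, the Burrows-Wheeler transform and move-to-front, before any entropy coder; the quadratic rotation sort is for clarity only and is not how production implementations work.

```python
def bwt(s):
    """Burrows-Wheeler transform via explicit rotation sort (O(n^2 log n))."""
    s = s + "\0"                                  # unique end sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(text):
    """Recode characters as their index in a self-organizing list."""
    alphabet = sorted(set(text))
    out = []
    for ch in text:
        i = alphabet.index(ch)
        out.append(i)                             # runs become small ints
        alphabet.insert(0, alphabet.pop(i))
    return out

print(move_to_front(bwt("banana")))               # clusters of low values
```

The BWT groups similar contexts together, and move-to-front turns those groups into runs of small integers that a Huffman or arithmetic coder then compresses well.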
13

Dvořák, Martin. "Výukový video kodek [An educational video codec]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.

Full text
Abstract:
The first goal of this diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, based on this knowledge of compression tools, to implement the individual tools in the Matlab programming environment and to assemble a simple model of a video codec. The thesis contains a description of the three basic blocks, namely interframe coding, intraframe coding, and variable-length coding, according to the MPEG-2 standard.
14

Chang, Chih-Peng (張志鵬). "Segmented Vertex Chain Coding with Huffman Coding." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/56936669407017461858.

Full text
Abstract:
Master's thesis. Chaoyang University of Technology, Master's Program in Computer Science and Information Engineering, ROC year 96.
To significantly decrease the amount of information while still preserving the contour shape, chain coding is widely applied in digital image analysis, especially to raster-shaped images. In this paper, chain coding is integrated with the Single-side Grown Huffman Table (SGHT) to improve the data compression rate.
15

Baltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.

Full text
Abstract:
Huffman coding is a good method for statistically compressing test data with high compression rates. Unfortunately, the on-chip decoder needed to decompress that encoded test data after it is loaded onto the chip may be too complex. With limited die area, decoder complexity becomes a drawback, which makes pure Huffman coding less than ideal for scan data compression. Selectively encoding test data using Huffman coding can provide similarly high compression rates while reducing the complexity of the decoder. A smaller and less complex decoder makes alternate Huffman coding a viable option for compressing and decompressing scan test data.
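A sketch of the selective-encoding idea discussed here: only the k most frequent test blocks receive short codewords behind a '1' flag, while all other blocks are transmitted raw behind a '0' flag, keeping the on-chip decoder small. The unary-style codeword table is a stand-in for a real Huffman table, and the block stream and parameters are invented.

```python
from collections import Counter

def selective_encode(blocks, k=2, width=8):
    """Encode only the k most frequent blocks; send the rest raw."""
    top = [b for b, _ in Counter(blocks).most_common(k)]
    # Stand-in prefix-free table by frequency rank (a real design would
    # build a Huffman table over the top-k block frequencies).
    code = {b: "1" * i + "0" for i, b in enumerate(top)}
    bits = []
    for b in blocks:
        if b in code:
            bits.append("1" + code[b])                   # coded block
        else:
            bits.append("0" + format(b, f"0{width}b"))   # raw block
    return "".join(bits)

blocks = [0xF0, 0xF0, 0x0F, 0xAA, 0xF0, 0x55]
print(selective_encode(blocks))   # coded: 0xF0, 0x0F; raw: 0xAA, 0x55
```

The decoder only needs the small top-k table plus a raw-copy path, which is the die-area argument the abstract makes.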
16

Zheng, Li-Wen (鄭力文). "Personalize metro-style user interface by Huffman coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/rdxj52.

Full text
Abstract:
Master's thesis. National Taiwan University, Department of Engineering Science and Ocean Engineering, ROC year 105.
To satisfy users' needs for visual information in a user interface, Microsoft proposed the Metro UI design, whose dynamic tiles show the importance of the various functions in the operating system; the design attracted much attention, and many websites began to follow this concept in their user interfaces. This research uses the concept to build a new web-portal user interface that dynamically presents usage requirements. However, the size and layout of the dynamic tiles currently must be set in advance, and there is no effective method for computing them automatically. In this study, an automated Metro UI layout algorithm is proposed that computes dynamic tile sizes and layout from the user's usage frequency of system functions through Huffman coding. The experimental results show that the dynamically generated Metro UI is better suited to adjusting the user experience to different needs than the traditional static, fixed-size Metro UI.
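A sketch of the mapping the abstract implies: Huffman code lengths computed from usage frequencies, with shorter codes (more frequent functions) given larger tiles. The usage counts and the depth-to-size table are invented for illustration.

```python
import heapq

def code_lengths(freq):
    """Huffman code length per item via the usual two-least merge."""
    heap = [(f, [name]) for name, f in freq.items()]
    heapq.heapify(heap)
    depth = {name: 0 for name in freq}
    while len(heap) > 1:
        f1, g1 = heapq.heappop(heap)
        f2, g2 = heapq.heappop(heap)
        for name in g1 + g2:
            depth[name] += 1                  # one level deeper in the tree
        heapq.heappush(heap, (f1 + f2, g1 + g2))
    return depth

usage = {"mail": 40, "calendar": 25, "news": 20, "photos": 10, "settings": 5}
tile = {1: "large", 2: "wide", 3: "medium"}   # hypothetical depth-to-size map
for app, d in sorted(code_lengths(usage).items(), key=lambda kv: kv[1]):
    print(app, "->", tile.get(d, "small"))
```

Huffman depth is a convenient proxy here because it is monotone in frequency and the resulting tile areas, like codeword lengths, fit together without gaps.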
17

Γρίβας, Απόστολος. "Μελέτη και υλοποίηση αλγορίθμων συμπίεσης [Study and implementation of compression algorithms]." Thesis, 2011. http://nemertes.lis.upatras.gr/jspui/handle/10889/4336.

Full text
Abstract:
In this thesis we study some data compression algorithms and implement them. First, the basic principles of coding are presented together with the mathematical foundation of information theory, and various types of codes are introduced. Huffman coding and arithmetic coding are then analyzed in detail. Finally, the two codings are implemented on a computer using the C programming language and used to compress text files. The resulting files are compared with files compressed using commercial programs, the causes of the differences in efficiency are analyzed, and useful conclusions are drawn.
18

Wang, Ruey-Jen (王瑞禎). "On the Design and VLSI architecture for Dynamic Huffman Coding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98190957224448119235.

Full text
Abstract:
Master's thesis. National Cheng Kung University, Institute of Electrical Engineering, ROC year 82.
Huffman coding is a lossless data compression technique that achieves compact data representation by taking advantage of the statistical characteristics of the source. It is widely used in many data compression applications, such as high-definition television, disk operating systems, video coding, and large-scale data communication. Dynamic Huffman coding (DHC) can compress any data file without a preview pass. Compared with adaptive Huffman coding, the DHC method requires less memory and needs no side information; compared with static Huffman coding, it achieves a better compression ratio. In this thesis, a modified algorithm and CAM-based architectures for DHC are presented. The output throughput of the encoder is 1 bit/cycle. Based on this architecture, a DHC encoder chip is implemented with a gate count of 17,652 and a die area of 4.8 mm * 4.8 mm using the TSMC 0.8 um SPDM process. Timing analysis shows that the operating frequency is about 20 MHz.
19

Huang, Ya-Chen (黃雅臻). "Efficient Test Pattern Compression Techniques Based on Complementary Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93893687721312782837.

Full text
Abstract:
Master's thesis. Fu Jen Catholic University, Department of Electronic Engineering, ROC year 97.
In this thesis, complementary Huffman encoding techniques are proposed for test data compression of complex SOC designs during manufacturing testing. The correlations among blocks of bits in a test data set are exploited so that more test blocks can share the same codeword: besides the compatible blocks used in previous works, the complementary property between test blocks can also be used. Based on this property, two algorithms are proposed for Huffman encoding. With these techniques, more test blocks share the same codeword and the size of the Huffman tree is reduced, which not only lowers the area overhead of the decoding circuitry but also substantially increases the compression ratio. To facilitate the proposed complementary encoding techniques, a don't-care assignment algorithm is also proposed. According to experimental results, the area overhead of the decompression circuit is lower than that of the full Huffman coding technique, and the compression ratio is higher than that of the selective and optimal selective Huffman coding techniques proposed in previous works.
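A sketch of the complementary-block idea described here, under the assumption (mine, not the thesis's) of 8-bit blocks: a block and its bitwise complement map to one representative before frequency counting, so both can share a codeword plus a one-bit flag.

```python
from collections import Counter

WIDTH = 8
MASK = (1 << WIDTH) - 1

def canonical(block):
    """Collapse a block and its bitwise complement onto one representative."""
    return min(block, block ^ MASK)

blocks = [0b00001111, 0b11110000, 0b00001111, 0b10101010]
freq = Counter(canonical(b) for b in blocks)
# The Huffman table is then built over this halved alphabet; each emitted
# codeword carries one extra flag bit: 0 for the representative itself,
# 1 for its complement.
print(freq)   # 0b00001111 and 0b11110000 now share one symbol
```

Halving the alphabet shrinks the Huffman tree, which is where the decoder-area savings the abstract reports come from.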
20

Liu, Chia-Wei (劉家維). "A Dynamic Huffman Coding Method for TLC NAND Flash Memory." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ujktq2.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electronic Engineering, ROC year 107.
Recently, NAND flash memory has gradually replaced traditional hard-disk drives and become the mainstream storage device. NAND flash memory has many advantages, such as non-volatility, small size, low power consumption, fast access speed, and shock resistance. With advances in process technology, NAND flash memory has evolved from single-level cell (SLC) and multi-level cell (MLC) into triple-level cell (TLC) and even quad-level cell (QLC) designs. Despite these advantages, it also has physical problems, such as the erase-before-write characteristic and the limited number of P/E cycles; moreover, TLC NAND flash memory suffers from low reliability and a short lifetime. Thus, we propose a dynamic Huffman coding method that can be applied to the write operation of NAND flash memory. Our method dynamically selects a suitable Huffman code for different kinds of data and improves the V_TH distribution of NAND flash memory, reducing the bit error rate and improving reliability.
21

Lin, Che-Lung (林哲論). "Bounded Error Huffman Coding in Applications of Wireless Sensor Networks." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61263794337918009117.

Full text
Abstract:
Master's thesis. National Taiwan University, Department of Engineering Science and Ocean Engineering, ROC year 102.
Measurement error exists in realistic WSN applications due to hardware limitations and application conditions. On the other hand, prolonging the lifetime of a WSN is an important issue because of the limited battery capacity of sensors. In previous research, both Bounded Error Data Compression (BEC) and Improved Bounded Error Data Compression (IBEC) exploited the bounded error in data compression, under the condition of allowing data error, to reduce power consumption in WSNs. Unlike BEC and IBEC, the Bounded Error Huffman Coding (BEHC) proposed in this thesis applies the bounded error within Huffman coding itself. In temporal-correlation compression, the compression ratio is improved by avoiding excess bits in the codewords and by eliminating the defect of compressing data within the error bound into longer codewords. In addition, observation of IBEC in this thesis shows that its data format and spatial-correlation compression still have defects. Therefore, a New Improved Bounded Error Data Compression (NIBEC), which uses BEHC offline, is proposed to improve the data format and spatial-correlation compression. In the experiments, four types of raw data with different correlations are tested and compared with IBEC. The results show that NIBEC improves the compression ratio by 27%~47% and reduces power consumption by 25%~43%, demonstrating that NIBEC improves compression effectively.
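A simplified sketch of the bounded-error premise: if any reading may be replaced by a value at most eps away, readings can be snapped to bins of width 2*eps before Huffman coding, so fewer distinct symbols remain and codes shorten. This is not BEHC itself, whose codeword construction the abstract does not detail; the sensor values are invented.

```python
def quantize_bounded(readings, eps):
    """Snap each reading to the centre of a 2*eps-wide bin; the
    introduced error never exceeds eps."""
    width = 2 * eps
    return [round(r / width) * width for r in readings]

samples = [23.94, 24.02, 24.07, 23.98, 25.63]
print(quantize_bounded(samples, eps=0.1))
# four samples collapse to (roughly) 24.0 and one to 25.6, so the
# Huffman alphabet shrinks from five symbols to two
```

A smaller alphabet with more skewed frequencies means shorter codewords and fewer radio transmissions, which is the power-saving mechanism the abstract describes.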
22

Chen, Sze Ching (陳思靜). "A Line-Based Lossless Display Frame Compressor Using Huffman Coding and Longest Prefix Match Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/75028784547709858315.

Full text
Abstract:
Master's thesis. National Tsing Hua University, Department of Computer Science, ROC year 104.
We propose a lossless video frame compression algorithm employing a dictionary coding method, the Huffman coding method, and three additional schemes to achieve a high compression ratio. By analyzing the distribution of the differentials between the current pixel and its neighbors, we observe that the smaller the absolute value of the differential, the higher its probability. According to this distribution, we compute the data reduction ratio (DRR) for cases using different numbers of code words and find that the more code words used, the higher the DRR, which eventually approaches a plateau. Considering memory usage, we choose a suitable number of code words for Huffman encoding. We employ a two-staged classification (TC) scheme consisting of the dictionary coding method and a longest prefix match (LPM) method; for the LPM method, we choose for each pixel group a best truncation length (BTL) using an adaptive prefix bit truncation (APBT) scheme. We further compress the code words with a head code compression (HCC) scheme. Owing to the large number of code words used, we achieve about 0.5% more bit rate reduction than a previously proposed algorithm, and only 0.96% less bit rate reduction than using the maximum dictionary size.
23

Li, Chih Han (李致翰). "A Framework for EEG Compression with Compressive Sensing and Huffman Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28431597266494445081.

Full text
Abstract:
Master's thesis. National Tsing Hua University, Department of Electrical Engineering, ROC year 103.
Compressive sensing (CS) is an emerging technique for data compression. In this thesis, it is used to compress electroencephalogram (EEG) signals. CS rests on two major principles: sparsity and incoherence. However, EEG signals are not sparse enough, so CS can only recover compressed EEG signals at low compression ratios; at high compression ratios, recovery fails. The compression ratios at which EEG can be reconstructed with high quality are not high enough to make the system energy-efficient, so the compression is not meaningful. We therefore seek a way to make CS practical for compressing EEG signals at high compression ratios. From the literature, approaches to increasing CS performance fall into three classes: first, design a stronger reconstruction algorithm; second, find a dictionary in which EEG signals have a sparse representation; third, combine CS with other compression techniques. Here we take the first and third approaches. First, we propose a modified iterative pseudo-inverse multiplication (MIPIM) with complexity O(KMN), where M is the dimension of the measurements, N is the dimension of the signal, and K is the sparsity level. This complexity is lower than that of most existing algorithms. Next, we extend MIPIM into a multiple measurement vector (MMV) algorithm, called simultaneous MIPIM (SMIPIM), which recovers all channel signals at the same time and exploits the correlation among channels to increase performance. SMIPIM reduces the normalized mean square error (NMSE) by 0.06 compared with classical CS algorithms. For combining CS with other compression techniques, we adopt an existing framework that uses information from the server or receiver node to combine CS and Huffman coding efficiently. The framework was proposed to increase compression for telemedicine with EEG signals, but it has a shortcoming: the algorithm that produces the information takes a long computation time, making instant telemedicine unavailable because sensors cannot transmit data until the information is received. We therefore propose a replacement algorithm, reducing the complexity from O(L^5) to O(L^2), where L is the number of channels; in our experiments, our algorithm is about 10^5 times faster than the existing one. Finally, we simulated the entire system: the framework with our proposed algorithm for computing the channel-correlation information and SMIPIM for reconstruction. At a compression ratio of 3:1, the NMSE is 0.0672, whereas the original CS framework with Block Sparse Bayesian Learning Bound Optimization (BSBL-BO) gives 0.1554. On the other hand, for the minimum acceptable NMSE of 0.09 for EEG signals, we achieve a compression ratio of 0.31. Moreover, using this compression ratio, we estimate how many channels can be transmitted over a fixed bandwidth: the number of channels can increase by 16 with Bluetooth 2.0 and by 35 with ZigBee.
24

Tung, Chi-Yang (董啟揚). "A New Method of Image Compression by Wavelet Combining Huffman Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/97960665547728924728.

Full text
Abstract:
Master's thesis. Chung Yuan Christian University, Institute of Electrical Engineering, ROC year 104.
This study proposes a new method of image compression combining the wavelet transform with Huffman coding in order to reduce storage space, increase transmission speed, and improve image quality. First, we implement image compression by the wavelet transform; the plain wavelet transform serves as our baseline case and the wavelet transform combined with Huffman coding as our improved case. Second, we Huffman-encode the image after the wavelet transform. Third, we simulate the image compression in MATLAB, compressing colour and gray-level images and measuring the quality of the compressed images by their PSNR (peak signal-to-noise ratio). According to our simulations, the wavelet transform combined with Huffman coding performs significantly better than the wavelet transform alone and meets our requirements. The results of our research are as follows: 1. Reduced storage space: Huffman-encoding the compressed image reduces storage space compared with the wavelet transform alone. 2. Increased transmission speed: because the Huffman-encoded image file is smaller, transmission is faster. 3. Better image quality: measured by PSNR, the compressed images are not only smaller but also of better quality.
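For reference, a small sketch of the PSNR metric used here to judge the compressed images, for 8-bit data; the image arrays are random stand-ins.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))                      # stand-in original
b = np.clip(a + rng.integers(-2, 3, a.shape), 0, 255)   # mild distortion
print(f"PSNR = {psnr(a, b):.1f} dB")
```

Higher PSNR means the reconstruction is closer to the original, so a scheme that shrinks files while holding PSNR steady improves both of the study's goals at once.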
25

Li, Jyun-Ruei (李俊叡). "The Study of RFID Authentication Schemes Using Message Authentication and Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98965119282757619915.

Full text
Abstract:
Master's thesis. Asia University, Master's Program in Computer Science and Information Engineering, ROC year 97.
As RFID technology matures and its manufacturing cost falls, it is already widely used in many fields, such as supply chain management, access control systems, intelligent home appliances, electronic payment, and production automation. While RFID technology brings enormous commercial value and is simple and convenient to use, it threatens the security and privacy of individuals and organizations. In this thesis, we introduce the privacy and security problems and then propose a new scheme: we use Huffman coding to encode the tag ID and a hash function to strengthen data security. Our scheme has each RFID tag emit a pseudonym in response to every reader query, which makes tracking the activities and personal preferences of the tag's owner impractical and thus protects the user's privacy. In addition, the proposed scheme provides not only high security but also high efficiency.
26

Hussain, Fatima Omman. "Indirect text entry interfaces based on Huffman coding with unequal letter costs." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2008. Graduate Programme in Science.
Typescript. Includes bibliographical references (leaves 223-232). Also available on the Internet via the URL above.
27

Syu, Wei-Jyun (許瑋峻). "A Study of Reversible Data Hiding Based on SMVQ and Huffman Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/5uj776.

Full text
Abstract:
Master's thesis. National Formosa University, Institute of Computer Science and Information Engineering, ROC year 101.
Data hiding technology not only embeds high-payload secret data into a digital image but can also reconstruct the original cover image at the receiver. A reversible, high-payload data hiding scheme implemented in the SMVQ compression domain is proposed in this thesis. The idea is to hide secret data in the compression codes of the image by utilizing the sorted state codebook of SMVQ; the compression codes reversibly reconstruct the original VQ-compressed cover image. In addition, the Huffman coding technique is applied to compact the overall volume of data to be transmitted. The proposed scheme significantly enhances the VQ compression technique and achieves high embedding capacity. The experimental results show that the proposed scheme maintains good visual quality for the reconstructed VQ-compressed cover image and achieves the best performance among approaches in the literature, with a low average bit rate and a high embedding rate.
28

Chung, Wei-Sheng (鍾偉勝). "Proof of Violation with Adaptive Huffman Coding Hash Tree for Cloud Storage Service." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j622gu.

Full text
Abstract:
Master's thesis. National Central University, Department of Computer Science and Information Engineering, ROC year 106.
Although cloud storage services are very popular nowadays, users have no effective way to prove that the system misbehaved due to system errors, and thus cannot claim a loss even when data or files are damaged by internal errors. As a result, enterprise users often do not trust, or even refuse to adopt, cloud storage services. We design methods to solve this problem, focusing on Proof of Violation (POV). All updated files in cloud storage are signed by users and service providers with digital signatures, and their hash values are checked to detect violations, ensuring that both parties can trust each other. We propose the Adaptive Huffman Coding Hash Tree Construction (AHCHTC) algorithm for real-time POV in cloud storage services. The proposed algorithm dynamically adds and adjusts hash tree nodes according to the update counters of files, consuming less execution time and memory space than an existing hash-tree-based POV scheme. We further propose the Adaptive Huffman Coding Hash Tree Construction/Counter Adjustment (AHCHTC/CA) algorithm, which improves AHCHTC by adjusting the counters of all nodes associated with files while maintaining the sibling property of the hash tree structure. The AHCHTC/CA algorithm therefore constructs the hash tree according to recent update counters, instead of total update counters, which reflects recent file update patterns and further improves performance. Simulation experiments evaluate the proposed algorithms on the web page update patterns in NCUCCWiki and the file update patterns of the Harvard University network file system provided in the SNIA (Storage Networking Industry Association) IOTTA (Input/Output Traces, Tools, and Analysis) data set, and compare them with a related method. The comparisons show that the proposed algorithms are superior in computation time and memory overhead. We also present observations on the experimental results and possible application scenarios of the proposed algorithms.
29

Haung, Chan-Hao (黃展浩). "Improving the input speed of a multi-key prosthetic keyboard based on Huffman coding." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11577293147455365349.

Full text
Abstract:
Master's thesis. National Chi Nan University, Department of Computer Science and Information Engineering, ROC year 99.
In a conventional keyboard, such as a QWERTY keyboard, there are so many keys that the spaces between neighboring keys are too small for the physically disabled. In this study we propose a novel prosthetic keyboard with a reduced number of keys, so that the space between neighboring keys is sufficient for physically disabled users. Given only 12 keys on the designed keyboard, multiple keystrokes are required to input a character. Each character is encoded using a radix-12 Huffman algorithm, with its code determined by its appearance frequency in a typical typing task: the higher the frequency of a character, the shorter its code. Experiments with a subject with cerebral palsy showed that the average code length over all characters is 1.48 keys per character. Given the code sets, this study further proposes a method to find the optimal keyboard arrangement using the Particle Swarm Optimization (PSO) algorithm. Given the appearance frequency of each key in a typical typing task, the objective function is based on the total time required for the subject to press the keys, and the optimal keyboard arrangement is the one that minimizes it. Experiments compared the performance of three input methods: the proposed Huffman method, a dual-key method, and a 6-key Morse code method that the subject had used for years. A commonly used typing-speed test program recorded the subject's typing speed. Results showed that the proposed Huffman method helps the subject achieve more words per minute than the other two methods.
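A sketch of radix-12 Huffman coding as the abstract describes it: codewords are sequences over 12 keys rather than bits, built by repeatedly merging the 12 least frequent items after padding with dummy symbols so the final merge is full. The character frequencies are illustrative, not the study's corpus.

```python
import heapq

KEYS = "0123456789AB"              # one character per physical key

def nary_huffman(freq, r=12):
    """Huffman coding with radix-r codewords (r-ary merge)."""
    heap = [(f, [(sym, "")]) for sym, f in freq.items()]
    # Pad with zero-frequency dummies so every merge takes exactly r items.
    while (len(heap) - 1) % (r - 1) != 0:
        heap.append((0, []))
    heapq.heapify(heap)
    while len(heap) > 1:
        total, merged = 0, []
        for digit in range(min(r, len(heap))):
            f, group = heapq.heappop(heap)
            total += f
            merged += [(sym, KEYS[digit] + code) for sym, code in group]
        heapq.heappush(heap, (total, merged))
    return dict(heap[0][1])

freq = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0, " ": 18.0}
print(nary_huffman(freq))   # with this tiny alphabet, one keystroke each
```

With a full character set, common letters stay at one keystroke and rare ones take two, which is consistent with the reported average of 1.48 keys per character.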
30

Huang, Hsin-Han (黃信翰). "Hardware Implementation of a Real Time Lossless ECG Signal Compressor with Improved Multi-Stage Huffman Coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/58420664973565933749.

Full text
Abstract:
Master's thesis. Fu Jen Catholic University, Master's Program in Electrical Engineering, ROC year 105.
Electrocardiogram (ECG) monitoring systems are widely used in healthcare and telemedicine. The ECG signals must be compressed to enable efficient transmission and storage, and real-time monitoring is required; it is challenging to meet real-time requirements under transmission bandwidth limits. In this paper, we propose a hardware implementation of a real-time lossless ECG signal compressor, with a modified error predictor and a multi-stage Huffman encoding algorithm. Without sacrificing hardware cost, we use two-stage encoding tables to realize multi-stage encoding, which has better compression efficiency. We implemented the lossless compressor hardware on an ARM-based FPGA platform. Experiments on the MIT-BIH database show that the proposed design attains comparable compression performance and allows real-time data transmission over Bluetooth.
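A sketch of the predict-then-encode front end such compressors use: a simple predictor leaves small residuals for the staged Huffman tables to code compactly. The second-order predictor below is a common textbook choice, not necessarily the thesis's modified error predictor, and the ECG samples are invented.

```python
def residuals(samples):
    """Second-order linear prediction: x_hat[n] = 2*x[n-1] - x[n-2]."""
    out = list(samples[:2])                     # first two sent verbatim
    for n in range(2, len(samples)):
        pred = 2 * samples[n - 1] - samples[n - 2]
        out.append(samples[n] - pred)           # small values near zero
    return out

ecg = [512, 514, 517, 521, 524, 526, 527]
print(residuals(ecg))   # -> [512, 514, 1, 1, -1, -1, -1]
```

Because the residuals cluster near zero, a small first-stage table can cover the common values and a second stage can handle the rare large ones, which is the appeal of multi-stage encoding.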
31

Lai, Po-Yueh (賴柏岳). "Opus 編碼器中 Range Encoding 與 Huffman Coding 壓縮效率之比較 [A comparison of the compression efficiency of range encoding and Huffman coding in the Opus encoder]." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2rrkgd.

Full text
Abstract:
Master's thesis. National Taipei University of Technology, Department of Computer Science and Information Engineering, ROC year 105.
Nowadays, streaming is the predominant way to listen to digital music online. People used the MP3 and AAC formats in the past, but the MP3 format has gradually been retired in recent years. Many digital audio formats are now used in streaming, one of which is the Opus codec. In this thesis, we study the CELT layer in the Opus codec, using the Huffman coding from MP3 and AAC to replace the original PVQ and range encoding in CELT. Through this experiment, we can compare the compression efficiency of range encoding and Huffman coding. The experiment has two parts: first, obtaining the data from the source file; second, coding these data with the MP3 and AAC Huffman coding methods respectively and comparing the results with the original method.
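A back-of-the-envelope sketch of the comparison at issue: Huffman coding spends a whole number of bits per symbol, while a range coder can approach the entropy bound, so the gap printed below approximates what is at stake. The symbol probabilities are illustrative.

```python
import heapq
import math

def huffman_avg_length(probs):
    """Average codeword length of a binary Huffman code, in bits/symbol."""
    heap = [(p, [(s, 0)]) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, g1 = heapq.heappop(heap)
        p2, g2 = heapq.heappop(heap)
        # Every symbol in the merged subtree gets one bit deeper.
        heapq.heappush(heap, (p1 + p2, [(s, l + 1) for s, l in g1 + g2]))
    return sum(probs[s] * l for s, l in heap[0][1])

probs = {"a": 0.6, "b": 0.25, "c": 0.1, "d": 0.05}
entropy = -sum(p * math.log2(p) for p in probs.values())
print(f"Huffman: {huffman_avg_length(probs):.3f} bits/symbol, "
      f"entropy (range-coding bound): {entropy:.3f}")
```

The more skewed the distribution, the larger Huffman's whole-bit penalty, which is why a range coder tends to win on highly predictable symbol streams.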
32

Corapcioglu, Ahmet. "Reduction in bandwidth and buffer size by using modified Huffman coding after dropping the less frequent source symbols." Thesis, 1987. http://hdl.handle.net/10945/22451.

Full text