Dissertations / Theses on the topic 'Huffman coding'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 32 dissertations / theses for your research on the topic 'Huffman coding.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Kilic, Suha. "Modification of Huffman Coding." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21449.
Zou, Xin. "Compression and Decompression of Color MRI Image by Huffman Coding." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17029.
Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos [Indexed pattern search in compressed texts]." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.
Full textPattern matching over a big document collection is a very recurrent problem nowadays, as the growing use of the search engines reveal. In order to accomplish the search in a period of time independent from the collection size, it is necessary to index the collecion only one time. The index size is typically linear in the size of document collection. Data compression is another powerful resource to manage the ever growing size of the document collection. The objective in this assignment is to ally the indexed search to data compression, verifying alternatives to the current solutions, seeking improvement in search time and memory usage. The analysis on the index structures and compression algorithms indicates that joining the block inverted les with Huffman word-based compression is an interesting solution because it provides random access and compressed search. New prefix free codes are proposed in this assignment in order to enhance the compression and facilitate the generation of self-sinchronized codes, furthermore, with a truly viable random access. The advantage in this new codes is that they eliminate the need of generating the Huffman-code tree through the proposed mappings, which stands for economy of memory, compact encoding and shorter processing time. The results demonstrate gains of 7% and 9% in the compressed le size, with better compression and decompression times and lower memory consumption.
Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.
Lúdik, Michal. "Porovnání hlasových a audio kodeků [Comparison of voice and audio codecs]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.
Friedrich, Tomáš. "Komprese DNA sekvencí [Compression of DNA sequences]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237222.
Románek, Karel. "Nezávislý datalogger s USB připojením [Stand-alone datalogger with USB connection]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219113.
Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení [Compression of ECG signals acquired with a mobile device]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.
Krejčí, Michal. "Komprese dat [Data compression]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217934.
Ondra, Josef. "Komprese signálů EKG s využitím vlnkové transformace [Compression of ECG signals using the wavelet transform]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217209.
Grigoli, Francesco. "Studio dei codici, trasmissione e correzione efficiente di un messaggio [A study of codes and the efficient transmission and correction of a message]." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20965/.
Štys, Jiří. "Implementace statistických kompresních metod [Implementation of statistical compression methods]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413295.
Dvořák, Martin. "Výukový video kodek [Educational video codec]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.
Full textChang, Chih-Peng, and 張志鵬. "Segmented Vertex Chain Coding with Huffman Coding." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/56936669407017461858.
Chaoyang University of Technology, Master's Program, Department of Computer Science and Information Engineering, academic year 96 (2007/08)
To significantly decrease the amount of information while still preserving the contour shape, chain coding is widely applied in digital image analysis, especially to raster-shaped images. In this work, chain coding is integrated with the Single-side Grown Huffman Table (SGHT) to improve the data compression rate.
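As a rough illustration of why chain codes pair well with Huffman tables: contour directions are heavily skewed toward "continue straight," so entropy coding beats the fixed 3-bit Freeman code. The sketch below only computes codeword lengths for a toy chain; the SGHT construction in the thesis is more elaborate.

```python
# Hedged sketch: Huffman codeword lengths for 8-direction Freeman chain
# codes. The skew toward one direction is what SGHT-style coding exploits;
# the chain below is invented for illustration.
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Codeword length per symbol via the standard merge of the two lightest trees."""
    heap = [(f, [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, a = heapq.heappop(heap)
        f2, b = heapq.heappop(heap)
        for s in a + b:          # all symbols in the merged subtree sink one level
            depth[s] += 1
        heapq.heappush(heap, (f1 + f2, a + b))
    return depth

chain = [0, 0, 0, 1, 0, 0, 7, 0, 0, 0, 1, 0, 0, 0, 0, 6, 0, 0]  # mostly straight
lengths = huffman_lengths(Counter(chain))
bits = sum(lengths[s] for s in chain)
print(lengths)
print(f"Huffman: {bits} bits vs fixed 3-bit Freeman code: {3 * len(chain)} bits")
```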
Baltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.
Zheng, Li-Wen, and 鄭力文. "Personalize metro-style user interface by Huffman coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/rdxj52.
National Taiwan University, Graduate Institute of Engineering Science and Ocean Engineering, academic year 105 (2016/17)
To satisfy users' needs for visual information in a user interface, Microsoft proposed the Metro UI design, whose dynamic tiles reflect the importance of the various functions in the operating system. The design attracted wide attention, and many websites have since followed this concept in their user interfaces. This research applies the concept to build a new web-portal user interface that dynamically presents usage requirements. However, the size and layout of the dynamic tiles currently have to be set in advance, and there is no effective method for calculating them automatically. This study proposes an automated Metro UI algorithm that calculates tile size and layout from the user's usage frequency of system functions through Huffman coding. The experimental results show that the dynamically generated Metro UI adapts the operating experience to different user needs better than a traditional static, fixed-size Metro UI.
Γρίβας, Απόστολος. "Μελέτη και υλοποίηση αλγορίθμων συμπίεσης [Study and implementation of compression algorithms]." Thesis, 2011. http://nemertes.lis.upatras.gr/jspui/handle/10889/4336.
In this thesis we study and implement several data compression algorithms. The basic principles of coding are reviewed, the mathematical foundation of information theory is presented, and different types of codes are described. Huffman coding and arithmetic coding are then analyzed in detail. Finally, the two codings are implemented in the C programming language to compress text files. The resulting files are compared with files compressed by commercial programs, the causes of the differences in efficiency are analyzed, and useful conclusions are drawn.
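The thesis implements Huffman coding in C; purely as a reading aid, here is a compact sketch of the same character-level pipeline, including the decode step, so the round trip is concrete.

```python
# Hedged sketch of character-level Huffman coding with a full
# encode/decode round trip. Toy-sized; a real implementation such as the
# thesis's C program additionally handles file I/O and bit packing.
import heapq
from collections import Counter
from itertools import count

class Node:
    def __init__(self, char=None, left=None, right=None):
        self.char, self.left, self.right = char, left, right

def build_tree(text):
    tie = count()   # tie-breaker so heapq never compares Node objects
    heap = [(f, next(tie), Node(char=c)) for c, f in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), Node(left=a, right=b)))
    return heap[0][2]

def make_codes(node, prefix="", table=None):
    table = {} if table is None else table
    if node.char is not None:            # leaf: record the accumulated path
        table[node.char] = prefix or "0"
        return table
    make_codes(node.left, prefix + "0", table)
    make_codes(node.right, prefix + "1", table)
    return table

def decode(bits, root):
    out, node = [], root
    for b in bits:                       # walk the tree, emit at each leaf
        node = node.left if b == "0" else node.right
        if node.char is not None:
            out.append(node.char)
            node = root
    return "".join(out)

text = "abracadabra"
root = build_tree(text)
codes = make_codes(root)
bits = "".join(codes[c] for c in text)
assert decode(bits, root) == text
print(codes, f"-> {len(bits)} bits")
```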
Wang, Ruey-Jen, and 王瑞禎. "On the Design and VLSI architecture for Dynamic Huffman Coding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98190957224448119235.
National Cheng Kung University, Institute of Electrical Engineering, academic year 82 (1993/94)
Huffman coding is a lossless data compression technique that achieves compact data representation by exploiting the statistical characteristics of the source. It is widely used in many data compression applications, such as high-definition television, disk operating systems, video coding, and large-scale data communication. Dynamic Huffman coding (DHC) can compress any data file without a preview pass. Compared with adaptive Huffman coding, the DHC method requires less memory and needs no side information; compared with static Huffman coding, it achieves a better compression ratio. In this work, a modified algorithm and CAM-based architectures for DHC are presented. The output throughput of the encoder is 1 bit/cycle. Based on this architecture, a DHC encoder chip was implemented, with a gate count of 17,652 and a die area of 4.8 mm x 4.8 mm in the TSMC 0.8 um SPDM process. Timing analysis shows an operating frequency of about 20 MHz.
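The thesis targets a CAM-based hardware encoder, which is beyond a short sketch; the toy below only illustrates the property the abstract emphasizes, compressing with no preview pass and no transmitted side information, by re-deriving the code table from running counts after each block. True one-pass schemes (FGK/Vitter) update the tree per symbol instead.

```python
# Hedged software illustration of the dynamic-Huffman idea: encoder and
# decoder start from the same uniform counts and update them identically,
# so no code table ever needs to be sent. This block-wise rebuild is a
# simplification, not the thesis's algorithm or hardware.
import heapq
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a:
            codes[s] = "0" + codes[s]
        for s in b:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tie), a + b))
    return codes

def adaptive_encode(data, block_size=8):
    counts = Counter(range(256))            # uniform prior: no preview needed
    out = []
    for i in range(0, len(data), block_size):
        codes = huffman_codes(counts)       # table from everything seen so far
        block = data[i:i + block_size]
        out.append("".join(codes[b] for b in block))
        counts.update(block)                # the decoder mirrors this update
    return "".join(out)

payload = b"dynamic huffman coding compresses any file without preview"
bits = adaptive_encode(payload)
print(f"{len(bits)} bits vs {8 * len(payload)} bits fixed-length")
```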
Huang, Ya-Chen, and 黃雅臻. "Efficient Test Pattern Compression Techniques Based on Complementary Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93893687721312782837.
Fu Jen Catholic University, Department of Electronic Engineering, academic year 97 (2008/09)
In this thesis, complementary Huffman encoding techniques are proposed for test data compression of complex SOC designs during manufacturing testing. The correlations between blocks of bits in a test data set are exploited so that more test blocks can share the same codeword. Therefore, besides the compatible blocks used in previous works, the complementary property between test blocks can also be used. Based on this property, two algorithms are proposed for Huffman encoding. With these techniques, more test blocks share the same codeword and the size of the Huffman tree is reduced, which not only lowers the area overhead of the decoding circuitry but also substantially increases the compression ratio. To facilitate the proposed complementary encoding techniques, a don't-care assignment algorithm is also proposed. According to experimental results, the area overhead of the decompression circuit is lower than that of the full Huffman coding technique, and the compression ratio is higher than that of the selective and optimal selective Huffman coding techniques proposed in previous works.
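The core trick is easy to picture: a block and its bitwise complement can share one Huffman codeword plus a one-bit flag, which shrinks the alphabet and therefore the Huffman tree and the decoder. A toy sketch of that normalization (block width and data invented here):

```python
# Hedged sketch of the complementary property: map each test block to the
# smaller of itself and its complement, keeping a flag bit for the decoder.
# Fewer distinct symbols means a smaller Huffman tree and decoding circuit.
from collections import Counter

WIDTH = 4  # illustrative block width

def normalize(block):
    comp = block ^ ((1 << WIDTH) - 1)      # bitwise complement within WIDTH bits
    canon = min(block, comp)
    flag = 0 if canon == block else 1      # 1 = decoder must re-complement
    return canon, flag

blocks = [0b1010, 0b0101, 0b1100, 0b0011, 0b1010, 0b0011, 0b1111, 0b0000]
plain = Counter(blocks)
canon = Counter(normalize(b)[0] for b in blocks)
print(f"codewords needed without complement sharing: {len(plain)}")
print(f"codewords needed with complement sharing:    {len(canon)}")
```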
Liu, Chia-Wei, and 劉家維. "A Dynamic Huffman Coding Method for TLC NAND Flash Memory." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ujktq2.
National Taiwan University of Science and Technology, Department of Electronic Engineering, academic year 107 (2018/19)
Recently, NAND flash memory has gradually replaced traditional hard-disk drives and become the mainstream storage device. It offers many advantages, such as non-volatility, small size, low power consumption, fast access speed, and shock resistance. With advances in process technology, NAND flash memory has evolved from single-level cells (SLC) and multi-level cells (MLC) to triple-level cells (TLC) and even quad-level cells (QLC). Despite these advantages, it also has physical limitations, such as the erase-before-write characteristic and the limited number of P/E cycles; TLC NAND flash in particular suffers from low reliability and short lifetime. We therefore propose a dynamic Huffman coding method that can be applied to the write operation of NAND flash memory. Our method dynamically selects a suitable Huffman code for different kinds of data and improves the threshold-voltage (VTH) distribution of the cells, reducing the bit error rate and improving the reliability of NAND flash memory.
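The abstract's key step is dynamic selection among Huffman codes per kind of data. The sketch below picks, for each page, whichever pre-trained table encodes shortest and stores a small table ID. Note the simplification: the thesis selects codes to shape the threshold-voltage distribution, not merely to minimize length, and the tables here are invented.

```python
# Hedged sketch: dynamic selection among pre-trained Huffman tables.
# Selection here is by encoded length for simplicity; the thesis's
# criterion is the resulting cell-state (VTH) distribution.
import heapq
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a:
            codes[s] = "0" + codes[s]
        for s in b:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tie), a + b))
    return codes

def train_table(sample):
    freqs = Counter(range(256))            # smoothing: every byte stays encodable
    freqs.update(sample)
    return huffman_codes(freqs)

tables = {                                 # hypothetical data profiles
    "text": train_table(b"the quick brown fox jumps over the lazy dog" * 8),
    "zero": train_table(bytes(4096)),
}

def encode_page(page):
    best = min(tables, key=lambda t: sum(len(tables[t][b]) for b in page))
    return best, "".join(tables[best][b] for b in page)

page = b"log entry: huffman-coded payload for one TLC page"
table_id, bits = encode_page(page)
print(table_id, f"{len(bits)} bits vs {8 * len(page)} raw")
```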
Lin, Che-Lung, and 林哲論. "Bounded Error Huffman Coding in Applications of Wireless Sensor Networks." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61263794337918009117.
National Taiwan University, Graduate Institute of Engineering Science and Ocean Engineering, academic year 102 (2013/14)
Measurement error exists in realistic WSN applications due to hardware limitations and application conditions. At the same time, prolonging the lifetime of a WSN is an important issue because of the limited battery capacity of sensors. Previous work, Bounded Error Data Compression (BEC) and Improved Bounded Error Data Compression (IBEC), exploited the error bound during data compression, trading tolerable data error for lower power consumption in the WSN. Unlike BEC and IBEC, the Bounded Error Huffman Coding (BEHC) proposed in this thesis applies the error bound within Huffman coding itself. In data-correlation compression, the compression ratio is improved by avoiding excess bits in the codewords and by eliminating the defect of compressing data within the error bound into longer codes. In addition, analysis of IBEC in this thesis shows that its data format and spatial-correlation compression still have defects. Therefore, a New Improved Bounded Error Data Compression (NIBEC), which applies BEHC offline, is proposed to improve the data format and spatial-correlation compression for higher compression effectiveness. In experiments, four types of raw data with different correlations were tested and compared with IBEC. The results show that NIBEC improves the compression ratio by 27%~47% and reduces power consumption by 25%~43%, demonstrating that NIBEC improves compression effectively.
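The bounded-error premise can be shown in a few lines: if each reading may deviate by at most e anyway, samples can be snapped to bins of width 2e before entropy coding, which concentrates the histogram and shortens the Huffman codes. A sketch under invented data follows; BEHC itself is more involved.

```python
# Hedged sketch of bounded-error entropy coding: quantize to bins of
# width 2e (reconstruction error <= e), then Huffman-code bin indices.
# The readings and error bound are illustrative.
import heapq
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a:
            codes[s] = "0" + codes[s]
        for s in b:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tie), a + b))
    return codes

def quantize(samples, e):
    """Bin index per sample; reconstructing as index * 2e stays within +/- e."""
    return [round(s / (2 * e)) for s in samples]

readings = [23.1, 23.2, 23.1, 23.4, 23.2, 24.9, 23.1, 23.3, 23.2, 23.1]
bins = quantize(readings, e=0.25)
codes = huffman_codes(Counter(bins))
bits = sum(len(codes[b]) for b in bins)
print(codes)
print(f"{bits} bits for {len(readings)} readings, error bound 0.25")
```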
Chen, Sze Ching, and 陳思靜. "A Line-Based Lossless Display Frame Compressor Using Huffman Coding and Longest Prefix Match Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/75028784547709858315.
National Tsing Hua University, Department of Computer Science, academic year 104 (2015/16)
We propose a lossless video frame compression algorithm that employs a dictionary coding method, Huffman coding, and three auxiliary schemes to achieve a high compression ratio. By analyzing the distribution of the differentials between the current pixel and its neighbors, we observe that the smaller the absolute value of a differential, the higher its probability. According to this distribution, we compute the data reduction ratio (DRR) for different numbers of code words and find that the DRR grows with the number of code words before approaching a plateau. Considering memory usage, we choose a suitable number of code words for Huffman encoding. We employ a two-staged classification (TC) scheme consisting of the dictionary coding method and a longest prefix match (LPM) method; in the LPM method we choose, for each pixel group, a best truncation length (BTL) using an adaptive prefix bit truncation (APBT) scheme. We further compress the code words with a head code compression (HCC) scheme. Owing to the large number of code words used, we achieve about 0.5% more bit rate reduction than a previously proposed algorithm, and only 0.96% less bit rate reduction than using the maximum dictionary size.
Li, Chih Han, and 李致翰. "A Framework for EEG Compression with Compressive Sensing and Huffman Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28431597266494445081.
National Tsing Hua University, Department of Electrical Engineering, academic year 103 (2014/15)
Compressive sensing (CS) is an emerging technique for data compression. In this thesis, it is used to compress electroencephalogram (EEG) signals. CS rests on two major principles: sparsity and incoherence. However, EEG signals are not sparse enough, so CS can recover compressed EEG signals only at low compression ratios; at high compression ratios, recovery fails. The compression ratios at which EEG can be reconstructed with high quality are not high enough to make the system energy-efficient, so the compression is not meaningful. We therefore seek a way to make CS practical for compressing EEG signals at high compression ratios. From the literature, approaches to increasing CS performance fall into three classes: designing a stronger reconstruction algorithm, finding a dictionary in which EEG signals have a sparse representation, and combining CS with other compression techniques. Here we take the first and third approaches. First, we propose a modified iterative pseudo-inverse multiplication (MIPIM) with complexity O(KMN), where M is the dimension of the measurements, N is the dimension of the signal, and K is the sparsity level. This complexity is lower than that of most existing algorithms. Next, we extend MIPIM to a multiple measurement vector (MMV) algorithm, called simultaneous MIPIM (SMIPIM), which recovers all channels at the same time and exploits the correlation among channels to increase performance; it reduces the normalized mean square error (NMSE) by 0.06 compared with classical CS algorithms. For combining CS with other compression techniques, we adopt an existing framework that uses information from the server or receiver node to combine CS and Huffman coding efficiently. The framework was proposed to increase compression for EEG telemedicine, but it has a shortcoming: producing the required information takes a long computation time, making instant telemedicine unavailable because sensors cannot transmit data until the information is received. We therefore propose a replacement algorithm, reducing the complexity from O(L^5) to O(L^2), where L is the number of channels; in our experiments it is 10^5 times faster than the existing one. Finally, we simulated the entire system, using our proposed algorithm to compute the channel-correlation information and SMIPIM for reconstruction. At a compression ratio of 3:1, the NMSE is 0.0672, versus 0.1554 for the original CS framework with Block Sparse Bayesian Learning Bound Optimization (BSBL-BO). On the other hand, at the minimum acceptable NMSE for EEG signals, which is 0.09, we achieve a compression ratio of 0.31. Taking this compression ratio, we estimate how many channels can be transmitted within a fixed bandwidth: the number of channels can increase by 16 with Bluetooth 2.0 and by 35 with ZigBee.
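Only the sensor-side half of this pipeline is cheap enough to sketch; reconstruction (MIPIM/SMIPIM) is the hard part and is omitted. Below is a toy CS measurement step with an invented sparse signal; the roughly 3:1 ratio echoes the compression ratio quoted above.

```python
# Hedged sketch of the compressive-sensing measurement step the sensor
# performs: y = Phi @ x with a random (incoherent) sensing matrix. The
# sizes and sparsity level are arbitrary; recovery requires a solver such
# as the thesis's MIPIM/SMIPIM and is not shown.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 86, 8                     # ~3:1 compression, K-sparse toy signal
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
y = Phi @ x                              # M measurements sent instead of N samples
print(f"{N} samples -> {M} measurements per window")
```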
Tung, Chi-Yang, and 董啟揚. "A New Method of Image Compression by Wavelet Combining Huffman Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/97960665547728924728.
Chung Yuan Christian University, Institute of Electrical Engineering, academic year 104 (2015/16)
This study proposes a new method of image compression that combines the wavelet transform with Huffman coding, in order to reduce storage space, increase transmission speed, and improve image quality. First, we implement image compression by the wavelet transform; the wavelet transform alone serves as our baseline case, and the wavelet transform combined with Huffman coding as our improved case. Second, we Huffman-encode the image after the wavelet transform. Third, we simulate image compression in MATLAB, compressing color and gray-level images and measuring the quality of the compressed images by the PSNR (peak signal-to-noise ratio). According to our simulations, the wavelet transform combined with Huffman coding performs significantly better than the wavelet transform alone and meets our requirements. The results of our research are as follows:
1. Reduced storage space: Huffman-encoding the compressed image reduces storage space compared with the wavelet transform alone.
2. Increased transmission speed: because the Huffman-encoded image file is smaller than in the baseline case, transmission is faster.
3. Better image quality: measured by PSNR, the compressed images are not only smaller but also of better quality.
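The quality metric used above is standard; for reference, a short sketch of PSNR for 8-bit images on toy arrays:

```python
# PSNR as used to score the compressed images; the standard formula for
# 8-bit data, demonstrated on synthetic arrays.
import numpy as np

def psnr(original, compressed, peak=255.0):
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in "original"
noise = rng.integers(-3, 4, size=a.shape)                 # mild distortion
b = np.clip(a.astype(int) + noise, 0, 255)
print(f"PSNR: {psnr(a, b):.2f} dB")
```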
Li, Jyun-Ruei, and 李俊叡. "The Study of RFID Authentication Schemes Using Message Authentication and Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98965119282757619915.
Asia University, Master's Program, Department of Computer Science and Information Engineering, academic year 97 (2008/09)
As RFID technology matures and its manufacturing cost falls, it has become widely used in many fields, such as supply chain management, access control systems, intelligent home appliances, electronic payment, and production automation. While RFID technology brings enormous commercial value and is simple and convenient to use, it threatens the security and privacy of individuals and organizations. In this thesis, we introduce the privacy and security problems and then propose a new scheme: we use Huffman coding to encode the tag ID and a hash function to strengthen data security. Our scheme lets each RFID tag emit a pseudonym in response to every reader query, making it impractical to track the activities and personal preferences of the tag's owner and thus protecting user privacy. In addition, the proposed scheme provides both high security and high efficiency.
Hussain, Fatima Omman. "Indirect text entry interfaces based on Huffman coding with unequal letter costs /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965.
Typescript. Includes bibliographical references (leaves 223-232). Also available on the Internet.
Syu, Wei-Jyun, and 許瑋峻. "A Study of Reversible Data Hiding Based on SMVQ and Huffman Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/5uj776.
National Formosa University, Institute of Computer Science and Information Engineering, academic year 101 (2012/13)
Data hiding technology not only embeds high-payload secret data into a digital image but can also reconstruct the original cover image at the receiver. This thesis proposes a reversible, high-payload data hiding scheme implemented in the SMVQ compression domain of an image. The idea is to hide secret data in the compression codes of the image by exploiting the sorted state codebook of SMVQ; in the proposed scheme, the compression codes reversibly reconstruct the original VQ-compressed cover image. Besides, Huffman coding is applied to compact the volume of the overall data to be transmitted. The proposed scheme significantly enhances the VQ compression technique and achieves high embedding capacity. The experimental results show that it maintains good visual quality for the reconstructed VQ-compressed cover image and achieves the best performance among approaches in the literature, with a low average bit rate and a high embedding rate.
Chung, Wei-Sheng, and 鍾偉勝. "Proof of Violation with Adaptive Huffman Coding Hash Tree for Cloud Storage Service." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j622gu.
National Central University, Department of Computer Science and Information Engineering, academic year 106 (2017/18)
Although cloud storage services are very popular nowadays, users lack an effective way to prove that the system misbehaved due to system errors, and thus cannot claim a loss even when data or files are damaged by internal errors. As a result, enterprise users often do not trust, or refuse to adopt, cloud storage services. We design methods to solve this problem, focusing on Proof of Violation (POV). All updated files in cloud storage are signed by users and service providers with digital signatures, and their hash values are checked to detect violations, ensuring that both parties can trust each other. We propose the Adaptive Huffman Coding Hash Tree Construction (AHCHTC) algorithm for real-time POV in cloud storage services. The algorithm dynamically adds and adjusts hash tree nodes according to the update counters of files, consuming less execution time and memory space than an existing hash tree-based POV scheme. We further propose the Adaptive Huffman Coding Hash Tree Construction/Counter Adjustment (AHCHTC/CA) algorithm, which improves AHCHTC by adjusting the counters of all nodes associated with files while maintaining a hash tree structure that satisfies the sibling property. AHCHTC/CA thus constructs the hash tree according to recent update counters rather than total update counters, reflecting recent file update patterns and further improving performance. Simulation experiments evaluate the proposed algorithms on the web page update patterns in NCUCCWiki and the file update patterns of the Harvard University network file system provided in the SNIA (Storage Networking Industry Association) IOTTA (Input/Output Traces, Tools, and Analysis) data set, and compare them with a related method. The comparisons show that the proposed algorithms are superior to the related method in both computation time and memory overhead. We also present observations on the experimental results and possible application scenarios of the proposed algorithms.
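The construction idea is the familiar Huffman merge with file update counters as weights, so frequently updated files sit near the root and their verification paths stay short. A sketch of just that placement step, with hashing and sibling-property maintenance elided and invented file names:

```python
# Hedged sketch: place files in a hash tree by Huffman-merging their
# update counters, so hot files end up shallow (short proof paths).
# Hash computation and the AHCHTC/CA counter adjustments are omitted.
import heapq
from itertools import count

def leaf_depths(update_counts):
    tie = count()
    heap = [(c, next(tie), (f,)) for f, c in update_counts.items()]
    heapq.heapify(heap)
    depth = {f: 0 for f in update_counts}
    while len(heap) > 1:
        c1, _, a = heapq.heappop(heap)
        c2, _, b = heapq.heappop(heap)
        for f in a + b:
            depth[f] += 1                 # merged subtree sinks one level
        heapq.heappush(heap, (c1 + c2, next(tie), a + b))
    return depth

counters = {"report.docx": 120, "notes.txt": 40, "backup.tar": 3, "photo.jpg": 1}
print(leaf_depths(counters))   # frequently updated files get smaller depths
```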
Haung, Chan-Hao, and 黃展浩. "Improving the input speed of a multi-key prosthetic keyboard based on Huffman coding." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11577293147455365349.
National Chi Nan University, Department of Computer Science and Information Engineering, academic year 99 (2010/11)
On a conventional keyboard, such as a QWERTY keyboard, there are so many keys that the spaces between neighboring keys are too small for the physically disabled. In this study we propose a novel prosthetic keyboard with a reduced number of keys, so that the space between neighboring keys is sufficient for physically disabled users. With only 12 keys on the designed keyboard, multiple keystrokes are required to input a character. Each character is encoded using a radix-12 Huffman algorithm, with its code determined by its frequency of appearance in a typical typing task: the more frequent a character, the shorter its code. Experiments with a subject with cerebral palsy showed that the average code length over all characters is 1.48 keystrokes per character. Given the code sets, this study further proposes a method to find the optimal keyboard arrangement using the Particle Swarm Optimization (PSO) algorithm. Given the appearance frequency of each key in a typical typing task, the objective function is based on the total time required for the subject to press the keys, and the optimal keyboard arrangement is the one that minimizes this objective function. Experiments compared the performance of three input methods: the proposed Huffman method, the dual-key method, and a 6-key Morse code method that the subject had used for years. A commonly used typing speed test program recorded the subject's typing speed. Results showed that the proposed Huffman method helps the subject achieve more words per minute than the other two methods.
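A radix-12 Huffman code is an n-ary Huffman tree: merge the twelve lightest subtrees at each step, padding with dummy symbols so the first merge is full. The sketch below computes keystrokes per character for an invented frequency profile; the study itself measured 1.48 keystrokes per character on real typing data.

```python
# Hedged sketch of radix-12 Huffman coding for a 12-key keyboard: each
# character's code is a sequence of keystrokes, frequent characters
# getting fewer. Frequencies below are illustrative, not the study's.
import heapq
from collections import Counter
from itertools import count

def nary_huffman_depths(freqs, r=12):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    while (len(heap) - 1) % (r - 1) != 0:
        heap.append((0, next(tie), ()))   # dummies so the first merge is full
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    while len(heap) > 1:
        total, syms = 0, ()
        for _ in range(r):                # merge the r lightest subtrees
            f, _, group = heapq.heappop(heap)
            total += f
            syms += group
        for s in syms:
            depth[s] += 1
        heapq.heappush(heap, (total, next(tie), syms))
    return depth

freqs = Counter("the quick brown fox jumps over the lazy dog " * 5)
depths = nary_huffman_depths(freqs)
n = sum(freqs.values())
avg = sum(freqs[s] * depths[s] for s in freqs) / n
print(f"average keystrokes per character: {avg:.2f}")
```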
HUANG, HSIN-HAN, and 黃信翰. "Hardware Implementation of a Real Time Lossless ECG Signal Compressor with Improved Multi-Stage Huffman Coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/58420664973565933749.
Fu Jen Catholic University, Master's Program in Electrical Engineering, academic year 105 (2016/17)
Electrocardiogram (ECG) monitoring systems are widely used in healthcare and telemedicine. ECG signals must be compressed to enable efficient transmission and storage, and real-time monitoring is required, so it is challenging to meet both real-time requirements and transmission bandwidth limits. In this work, we propose a hardware implementation of a real-time lossless ECG signal compressor, with a modified error predictor and a multi-stage Huffman encoding algorithm. Without sacrificing hardware cost, two-stage encoding tables realize multi-stage encoding with better compression efficiency. We implemented the lossless compressor hardware on an ARM-based FPGA platform. Experiments on the MIT-BIH database show that the proposed work attains comparable compression performance and allows real-time data transmission in a Bluetooth environment.
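The pipeline shape, predict each sample and entropy-code the small, peaky prediction errors, is easy to see in miniature. Below, a first-order predictor and a single Huffman table stand in for the thesis's modified predictor and two-stage tables; the samples are invented.

```python
# Hedged sketch of lossless ECG compression: delta-predict, then
# Huffman-code the residuals. The 11-bit raw width is an assumption
# (MIT-BIH records use 11-bit ADCs); everything else is illustrative.
import heapq
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a:
            codes[s] = "0" + codes[s]
        for s in b:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tie), a + b))
    return codes

samples = [512, 514, 517, 521, 526, 530, 531, 529, 524, 518, 513, 511]
errors = [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]
codes = huffman_codes(Counter(errors))
bits = sum(len(codes[e]) for e in errors)
print(f"{bits} bits vs {11 * len(samples)} bits raw")
```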
Lai, Po-Yueh, and 賴柏岳. "Opus 編碼器中 Range Encoding 與 Huffman Coding 壓縮效率之比較 [A comparison of the compression efficiency of range encoding and Huffman coding in the Opus encoder]." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2rrkgd.
National Taipei University of Technology, Department of Computer Science and Information Engineering, academic year 105 (2016/17)
Nowadays, streaming is the dominant way to listen to digital music online. MP3 and AAC were the formats of the past, and MP3 has gradually been retired in recent years; many digital audio formats are now used in streaming, one of which is the Opus codec. In this thesis, we study the CELT layer of the Opus codec and use the Huffman coding of MP3 and AAC to replace CELT's original PVQ and range encoding. Through this experiment, we can compare the compression efficiency of range encoding and Huffman coding. The experiment has two parts: first, obtaining the data from the source file; second, coding these data with the MP3 and AAC Huffman coding methods respectively and comparing the results with the original method.
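The gap the thesis measures has a simple information-theoretic face: range/arithmetic coding can approach the Shannon entropy, while Huffman pays an integer-length rounding penalty of up to nearly one bit per symbol. A toy distribution makes the point; the thesis, of course, measures real CELT symbols.

```python
# Hedged sketch: Huffman's average code length vs the entropy bound that
# range coding approaches. The skewed toy distribution exaggerates the gap.
import heapq
import math
from itertools import count

def huffman_codes(freqs):
    tie = count()
    heap = [(f, next(tie), (s,)) for s, f in freqs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        for s in a:
            codes[s] = "0" + codes[s]
        for s in b:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tie), a + b))
    return codes

freqs = {"a": 90, "b": 5, "c": 3, "d": 2}
total = sum(freqs.values())
codes = huffman_codes(freqs)
avg = sum(freqs[s] * len(codes[s]) for s in freqs) / total
entropy = -sum(f / total * math.log2(f / total) for f in freqs.values())
print(f"Huffman: {avg:.3f} bits/symbol; entropy (range-coding bound): {entropy:.3f}")
```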
Corapcioglu, Ahmet. "Reduction in bandwidth and buffer size by using modified Huffman coding after dropping the less frequent source symbols." Thesis, 1987. http://hdl.handle.net/10945/22451.