Dissertations / Theses on the topic 'Huffman’s algorithm'

Consult the top 16 dissertations / theses for your research on the topic 'Huffman’s algorithm.'

1

Хованська, Т. А. "Проблеми створення і стиснення великих інформаційних сховищ і складів даних" [Problems of building and compressing large information repositories and data warehouses]. Master's thesis, Sumy State University (Сумський державний університет), 2019. http://essuir.sumdu.edu.ua/handle/123456789/76429.

2

Devulapalli, Venkata Lakshmi Narasimha. "Application of Huffman Data Compression Algorithm in Hashing Computation." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2614.

Abstract:
Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function takes a text message of arbitrary length as input and converts it into a fixed-length string of characters that is infeasible to invert. The values returned by the hash function are called the message digest or simply hash values. Because of their versatility, hash functions are used in many applications, such as message authentication, digital signatures, and password hashing [Thomsen and Knudsen, 2005]. The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. Huffman coding is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [Huffman, 1952]. An integrated approach is applied to obtain a new compressed hash function by embedding Huffman compressed codes in the core hashing computation of the original hash function.
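The abstract above describes Huffman coding as an optimal prefix code driven by letter frequencies. As an illustrative sketch only (a plain greedy Huffman construction in Python, not the thesis's integration with SHA-1; the function name and merging style are generic, not taken from the work):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free Huffman code from the symbol frequencies of `text`."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate one-symbol input
        return {s: "0" for s in freq}
    # Heap items: (weight, tie-breaker, {symbol: code-suffix-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # pop the two lightest subtrees...
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # ...merge under a new node
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# the most frequent symbol ('a', 5 occurrences) gets the shortest codeword
assert all(len(codes["a"]) <= len(c) for c in codes.values())
```

Because subtrees are always merged lightest-first, no codeword is a prefix of another and frequent symbols end up near the root, which is what makes the code optimal.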
3

Maciel, Marcos Costa. "Compressão de dados ambientais em redes de sensores sem fio usando código de Huffman." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/506.

Abstract:
Fundação do Amparo à Pesquisa do Estado do Amazonas (FAPEAM)
In this master's thesis we present a lightweight lossless data compression method for wireless sensor networks (WSN). The method is based on conventional Huffman coding applied to a sample set of monitored parameters that have a strong temporal correlation: a Huffman dictionary is generated from the measured probabilities and can then be reused for other parameter sets with the same characteristics. Simulation results using temperature and relative humidity measurements show that the proposed method outperforms popular compression mechanisms designed specifically for wireless sensor networks.
4

Friedrich, Tomáš. "Komprese DNA sekvencí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237222.

Abstract:
The increasing volume of biological data requires finding new ways to store these data in genetic banks. The goal of this work is the design and implementation of a novel algorithm for compression of DNA sequences. The algorithm is based on aligning DNA sequences against a reference sequence and storing only the differences between each sequence and the reference model. The work contains the basic prerequisites from molecular biology needed to understand the details of the algorithm. Next, alignment algorithms and common compression schemes suitable for storing differences against a reference sequence are described. The work continues with a description of the implementation, followed by a derivation of time and space complexity and a comparison with common compression algorithms. Possible continuations of this thesis are discussed in the conclusion.
5

Le, Thu Anh. "An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849728/.

Abstract:
This thesis is an exploration and exposition of a highly efficient shallow neural-network algorithm called word2vec, which was developed by T. Mikolov et al. to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself, including continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
6

Dvořák, Martin. "Výukový video kodek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.

Abstract:
The first goal of this diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, building on this knowledge of compression tools, to implement the individual tools in the Matlab programming environment and assemble them into a simple model of a video codec. The thesis contains a description of the three basic blocks, namely interframe coding, intraframe coding, and variable-length coding, according to the MPEG-2 standard.
7

陳宏綜. "A Memory-Efficient and Fast Huffman Decoding Algorithm." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/64740387702559843977.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Information Management Program, Graduate Institute of Management, ROC academic year 87.
To reduce the memory size and speed up the process of searching for a symbol in a Huffman tree, we propose a memory-efficient array data structure to represent the Huffman tree. We then present a fast Huffman decoding algorithm, which takes $O(m \log n)$ time and uses $3n/2 + (n/2)\log n + 1$ memory space, where $n$ is the number of symbols in the Huffman tree.
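The abstract does not give the array layout, so the following is only a generic sketch of the idea it builds on: replace pointer-chasing with a flat array of fixed-size node records and decode with a loop over indices. The thesis's actual structure is considerably more compact than this.

```python
def build_tree_array(codes):
    """Flatten a Huffman code table into an array of [left, right, symbol] records.

    Internal nodes hold child indices (-1 = absent); leaves hold the symbol.
    A generic layout for illustration, not the thesis's data structure.
    """
    nodes = [[-1, -1, None]]                 # index 0 is the root
    for sym, code in codes.items():
        cur = 0
        for bit in code:
            side = 0 if bit == "0" else 1
            if nodes[cur][side] == -1:       # allocate a child on demand
                nodes.append([-1, -1, None])
                nodes[cur][side] = len(nodes) - 1
            cur = nodes[cur][side]
        nodes[cur][2] = sym                  # mark the leaf
    return nodes

def decode(nodes, bits):
    """Follow one array index per input bit; emit a symbol at each leaf."""
    out, cur = [], 0
    for bit in bits:
        cur = nodes[cur][0 if bit == "0" else 1]
        if nodes[cur][2] is not None:
            out.append(nodes[cur][2])
            cur = 0                          # restart at the root
    return "".join(out)

nodes = build_tree_array({"a": "0", "b": "10", "c": "11"})
assert decode(nodes, "010110") == "abca"
```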
8

Lin, Yi-Kai, and 林義凱. "A Space-Efficient Huffman Decoding Algorithm and its Parallelism." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/51208485736616000258.

9

Lin, Yih-Kai, and 林義凱. "A Space-Efficient Huffman Decoding Algorithm and its Parallelism." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/72819908164921559975.

Abstract:
Master's thesis, National Taiwan Institute of Technology, Graduate Institute of Management Technology, ROC academic year 85.
This paper first transforms the Huffman tree into a single-side growing Huffman tree, then presents a memory-efficient data structure to represent the single-side growing Huffman tree, which requires $O((n+d)\lceil \log_{2} n \rceil)$ bits of memory space, where $n$ is the number of source symbols and $d$ is the depth of the Huffman tree. Based on the proposed data structure, we present an $O(d)$-time Huffman decoding algorithm. Using the same example, the memory required by our decoding algorithm is much less than that of Hashemian in 1995. We finally modify the proposed data structure slightly to design an $O(1)$-time parallel Huffman decoding algorithm on a concurrent-read exclusive-write parallel random-access machine (CREW PRAM) using $d$ processors.
10

林垂慶. "Huffman Codec Design Based H.263+ Video Encryption Algorithms." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56514332526578049996.

Abstract:
Master's thesis, National Chi Nan University, Department of Computer Science and Information Engineering, ROC academic year 92.
With the advancement of science and technology, the transmission of digital video data, such as video-conference systems and online pay-TV, has become more and more popular, and such streams may be grabbed by a hacker. Hence, to prevent data piracy and plagiarism, the encryption of multimedia data has become an extremely important issue. In this paper, we propose an efficient encryption framework that does not seriously affect the coding speed or compression efficiency of the original codec system. Bit-scrambling techniques applied to the data in either the time domain or the frequency domain seriously degrade the performance of the codec system, and the first of the compression-domain encryption algorithms proposed in [2] is also inefficient when the coded bitstream contains a large number of motion-vector codewords. We therefore propose a lightweight encryption framework based on the modification of Huffman tables, implemented and verified to be efficient when embedded in an H.263+ codec, yielding an encryption system that retains fast coding speed and good compression efficiency. First, we scramble the fixed-length code (FLC) tables and variable-length code (VLC) tables using the splay-tree algorithm. Next, we use a chaotic algorithm to produce the secret key promptly and confidentially. Furthermore, different security configurations can be achieved by easily adapting our system.
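The thesis scrambles its tables with a splay-tree algorithm and derives keys chaotically; none of that is reproduced here. As a loose, hypothetical illustration of why Huffman-table scrambling can be "lightweight" (permuting codewords within each length class leaves every code length, and hence the compression ratio, unchanged), one might write:

```python
import random

def scramble_codes(codes, key):
    """Keyed permutation of a Huffman table.

    Symbols swap codewords only within their codeword-length class, so each
    symbol keeps a code of the same length and compression ratio is unchanged,
    while a decoder without `key` recovers garbage. (Illustrative stand-in,
    not the thesis's splay-tree / chaotic scheme.)
    """
    rng = random.Random(key)                 # deterministic for a given key
    by_len = {}
    for sym, code in codes.items():
        by_len.setdefault(len(code), []).append((sym, code))
    scrambled = {}
    for group in by_len.values():
        syms = [s for s, _ in group]
        rng.shuffle(syms)                    # keyed shuffle inside one class
        for new_sym, (_, code) in zip(syms, group):
            scrambled[new_sym] = code
    return scrambled

plain = {"a": "00", "b": "01", "c": "10", "d": "11"}
mixed = scramble_codes(plain, key=0xC0DE)
assert sorted(mixed.values()) == sorted(plain.values())    # same codeword set
assert all(len(mixed[s]) == len(plain[s]) for s in plain)  # same lengths
```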
11

Wu, Wei-I., and 吳威怡. "Bit- and Trellis- Based Joint Huffman and Convolutional Sequential Decoding Algorithms." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/05464860396541934297.

Abstract:
Master's thesis, National Chi Nan University, Department of Computer Science and Information Engineering, ROC academic year 96.
According to Shannon's separation theorem, the performance of the overall system is optimal when the source coding and the channel coding are optimized separately. In practice, however, due to constraints on complexity and delay, the performance of separate decoding is usually not optimal. To improve it further, the residual redundancy left after compression, the source a priori information, and the channel statistical information can be exploited in a so-called joint source-channel decoding (JSCD) scheme. Traditionally, the trellis adopted in the Viterbi decoding algorithm becomes tremendously large when all the source and channel information is utilized; although the decoding performance is optimal, the decoding complexity becomes quite expensive, so the approach is not practical. In this work, a new maximum a posteriori probability (MAP) metric with lower computational complexity is derived first, and we then propose a bit- and trellis-based joint sequential decoding algorithm along with a suboptimal variant. Simulation results indicate that the suboptimal method provides nearly the same performance as the optimal scheme while exhibiting significantly lower complexity.
12

Hussain, Fatima Omman. "Indirect text entry interfaces based on Huffman coding with unequal letter costs /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965.

Abstract:
Thesis (M.Sc.)--York University, 2008. Graduate Programme in Science.
Typescript. Includes bibliographical references (leaves 223-232). Also available online via the URL above.
13

Ho, Han-chang, and 何函璋. "A Fast Huffman Decoding Algorithm by Multiple Bit Length Search Scheme for the MPEG-4 AAC." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/04149818862470676859.

Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering (MS/PhD Program), ROC academic year 97.
Huffman coding is an important part of the MPEG-4 Advanced Audio Coding (AAC) standard. Its function is to encode the input data stream with shorter bit strings so that the audio samples can be compressed into a smaller size; the compressed data are recovered by Huffman decoding on the decoder side. Huffman decoding suffers long delays because an excessive number of searches for the decoded symbols consumes computation time. In this paper, a fast Huffman decoding algorithm is proposed to reduce the number of searches required. The algorithm uses a forward search scheme in which 5 bits are examined in each search step when decoding the symbols. The experimental results show that the 5-bit search scheme hits with a probability of 72%, so the proposed method reduces the number of searches, leading to lower processing delay and higher processing speed for Huffman decoding. Compared with other Huffman decoding algorithms, the proposed algorithm reduces the number of searches by 27% to 58% and the number of instructions by 40% to 67%.
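The 5-bit forward search described above resembles generic table-driven multi-bit Huffman decoding. The sketch below is hypothetical (not the thesis's AAC codebooks or exact scheme): it precomputes a 2^k-entry table so that any codeword of at most k bits is decoded in a single lookup, falling back to bit-by-bit matching for longer codewords.

```python
def build_lookup(codes, k=5):
    """Map every k-bit prefix that starts with a codeword of length <= k
    to (symbol, codeword length); prefix-freeness keeps entries disjoint."""
    table = {}
    for sym, code in codes.items():
        if len(code) <= k:
            pad = k - len(code)
            base = int(code, 2) << pad
            for tail in range(1 << pad):     # all completions of the codeword
                table[base | tail] = (sym, len(code))
    return table

def decode_fast(table, codes, bits, k=5):
    """Decode a well-formed bitstream: one table lookup per codeword when
    the k-bit window hits, one slow bit-by-bit match otherwise."""
    inv = {v: s for s, v in codes.items()}   # slow path: codeword -> symbol
    out, i = [], 0
    while i < len(bits):
        window = bits[i:i + k].ljust(k, "0") # zero-pad near the stream's end
        hit = table.get(int(window, 2))
        if hit and i + hit[1] <= len(bits):
            out.append(hit[0])
            i += hit[1]
        else:                                # codeword longer than k bits
            j = i + 1
            while bits[i:j] not in inv:
                j += 1
            out.append(inv[bits[i:j]])
            i = j
    return "".join(out)

codes = {"a": "0", "b": "10", "c": "110", "d": "111"}
table = build_lookup(codes, k=5)
assert decode_fast(table, codes, "100111110010", k=5) == "badcab"
```

The hit-rate trade-off the abstract reports (72% for 5-bit windows) corresponds to how often the fast branch is taken instead of the slow one.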
14

Shih, Jing-Wun, and 施景文. "FPGA implementations of IP routing-lookup and Huffman decoder algorithms based on Binary Decision Diagrams." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/31059390867997949287.

Abstract:
Master's thesis, Lunghwa University of Science and Technology, Master's Program, Department of Electronic Engineering, ROC academic year 95.
Because the Binary Decision Diagram (BDD) technique supports node sharing, it can eliminate redundant nodes and efficiently reduce the cost of combinational circuits. In this thesis, we have implemented an IP routing lookup and a Huffman decoder with Binary Decision Diagrams on an FPGA. Internet traffic is growing rapidly because of the increase in users and multimedia applications, and with the increased traffic, fast links and routers are required. Now that gigabit links are available, the chief traffic bottleneck has moved from the links to the routers. We have therefore designed and implemented a new IP table-lookup algorithm with binary decision diagrams using an FPGA, and implemented a simple routing-table example with a Stratix EP1S40F780C5 FPGA from Altera. The simulation and experimental results show that the proposed scheme can perform 10^8 lookups per second; if the average packet size is 40 bytes, the scheme can support up to a 4 Gbps link speed, and it would support faster link speeds if implemented with modern VLSI technology. On the other hand, we have implemented Huffman decoders using a finite state machine and a multiplexer, respectively. After using Binary Decision Diagrams to reduce the Huffman tree, we implemented the Huffman decoders based on the finite state machine and the multiplexer again. To verify the approach, we implemented a simple Huffman coding example with the same Stratix EP1S40F780C5 FPGA. The simulation and experimental results show that the finite state machine can decode 2*10^7 times per second, and the multiplexer can decode 10^8 times per second.
15

Cruz, Kouichi Julián Andrés. "Desarrollo de un algoritmo de compresión de datos optimizado para imágenes satelitales." Bachelor's thesis, 2017. http://hdl.handle.net/11086/5863.

Abstract:
Thesis (Licenciatura in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2017.
Satellite images keep increasing in size, to the point where sizes on the order of gigabytes are now normal. We also face large volumes of data when generating products such as mosaics, which complicates both their processing and their transfer or distribution to end users, and the data must likewise be handled on board the satellite platform and during downlink to Earth. In this thesis, we design and implement a lossy compression algorithm targeted at this problem, using the Discrete Wavelet Transform and Huffman coding.
16

Saradha, R. "Malware Analysis using Profile Hidden Markov Models and Intrusion Detection in a Stream Learning Setting." Thesis, 2014. http://hdl.handle.net/2005/3129.

Abstract:
In the last decade, many machine learning and data mining approaches have been applied to intrusion detection, malware detection and classification, and traffic analysis. In malware analysis, static binary analysis has become increasingly difficult because of the code obfuscation and code packing employed when writing malware; for this reason, behavior-based analysis techniques are used in large malware analysis systems. In prior art, a number of clustering and classification techniques have been used to classify malware into families, and to identify new malware families, from behavior reports. In this thesis, we analyse in detail the use of Profile Hidden Markov Models for malware classification and clustering; the ability to build accurate models from limited examples is very helpful for early detection and modeling of malware families. The thesis also revisits the learning setting of an intrusion detection system (IDS) that employs machine learning to distinguish attacks from normal traffic. It substantiates the suitability of an incremental (stream-based) learning setting for learning attack patterns in an IDS when large volumes of data arrive in a stream. Related to this problem, an elaborate survey of IDSs that use data mining and machine learning was carried out. Experimental evaluation and comparison show that, in terms of speed and accuracy, the stream-based algorithms perform very well as large volumes of data are presented for classification as attack or non-attack patterns. The possibilities for using stream algorithms on different problems in security are elucidated in the conclusion.