Dissertations / Theses on the topic 'Differential and Huffman coding'
Consult the top 50 dissertations / theses for your research on the topic 'Differential and Huffman coding.'
Románek, Karel. "Nezávislý datalogger s USB připojením." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219113.
Kilic, Suha. "Modification of Huffman Coding." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21449.
Zou, Xin. "Compression and Decompression of Color MRI Image by Huffman Coding." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17029.
Griffin, Anthony. "Coding CPFSK for differential demodulation." Thesis, University of Canterbury. Electrical and Electronic Engineering, 2000. http://hdl.handle.net/10092/6031.
Song, Lingyang. "Differential space-time coding techniques and MIMO." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434157.
Nelson, Tom. "ALAMOUTI SPACE-TIME CODING FOR QPSK WITH DELAY DIFFERENTIAL." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/607483.
Space-time coding (STC) for QPSK where the transmitted signals are received with the same delay is well known. This paper examines the case where the transmitted signals are received with a nonnegligible delay differential when the Alamouti 2x1 STC is used. Such a differential can be caused by a large spacing of the transmit antennas. In this paper, an expression for the received signal with a delay differential is derived and a decoding algorithm for that signal is developed. In addition, the performance of this new algorithm is compared to the standard Alamouti decoding algorithm for various delay differentials.
Wong, K. H. J. "Adaptive differential pulse code modulation and sub-band coding of speech signals." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380170.
Yoshida, K. "Speech coding by adaptive differential pulse code modulation with adaptive bit allocation." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37905.
Karlsson, Joakim. "Differential and co-expression of long non-coding RNAs in abdominal aortic aneurysm." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-236141.
Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.
Pattern matching over a big document collection is a very recurrent problem nowadays, as the growing use of search engines reveals. In order to accomplish the search in a period of time independent of the collection size, it is necessary to index the collection only once. The index size is typically linear in the size of the document collection. Data compression is another powerful resource to manage the ever-growing size of the collection. The objective of this work is to ally indexed search with data compression, examining alternatives to current solutions and seeking improvements in search time and memory usage. The analysis of index structures and compression algorithms indicates that joining block inverted files with Huffman word-based compression is an interesting solution because it provides random access and compressed search. New prefix-free codes are proposed in this work to enhance the compression and facilitate the generation of self-synchronized codes with truly viable random access. The advantage of these new codes is that they eliminate the need to generate the Huffman-code tree through the proposed mappings, which translates into memory savings, compact encoding, and shorter processing time. The results demonstrate gains of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
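The tree-free, mapping-based codes described in the abstract above can be illustrated with canonical Huffman coding, where codewords are assigned from the sorted code lengths alone, so the decoder never needs an explicit tree. A minimal sketch; the symbol lengths below are hypothetical example data, not the thesis's actual codes:

```python
# Canonical Huffman: assign codewords from code lengths alone, no tree needed.
def canonical_codes(lengths):
    """Map {symbol: code_length} -> {symbol: bitstring} canonically."""
    # Sort by (length, symbol) and assign consecutive binary values.
    items = sorted(lengths.items(), key=lambda kv: (kv[1], kv[0]))
    codes, code, prev_len = {}, 0, 0
    for sym, length in items:
        code <<= (length - prev_len)   # left-shift when lengths grow
        codes[sym] = format(code, f'0{length}b')
        code += 1
        prev_len = length
    return codes

codes = canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3})
# 'a' -> '0', 'b' -> '10', 'c' -> '110', 'd' -> '111'
```

Because the assignment is fully determined by the lengths, a decoder can rebuild the same table from a compact list of (length, symbol) pairs.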
Nelson, Tom. "SPACE-TIME CODED SOQPSK IN THE PRESENCE OF DIFFERENTIAL DELAYS." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605785.
This paper presents a method of detecting the Tier I modulation SOQPSK when it is used in a space-time coded (STC) system in which there is a non-negligible differential delay between the received signals. Space-time codes are useful to eliminate data dropouts which occur on aeronautical telemetry channels in which transmit diversity is employed. The proposed detection algorithm employs a trellis to detect the data while accounting for the offset between the in-phase and quadrature-phase components of the signals as well as the differential delay. The performance of the system is simulated and presented and it is shown that the STC eliminates the BER floor which results from the data dropouts.
Nelson, N. Thomas. "Space-Time Coding with Offset Modulations." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2155.pdf.
Chembil Palat Ramesh. "VT-STAR design and implementation of a test bed for differential space-time block coding and MIMO channel measurements." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35712.
Schleimer, Jan Hendrik. "Spike statistics and coding properties of phase models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2013. http://dx.doi.org/10.18452/16788.
The goal of the thesis is to establish quantitative, analytical relations between the biophysical properties of nerve membranes and the performed neuronal computations for neurons in a tonically spiking regime and in the presence of intrinsic noise. For this purpose, two major lines of investigation are followed. Firstly, microscopic noise caused by the stochastic opening and closing of ion channels is mapped to the macroscopic spike jitter that affects neural coding. The method is generic enough to allow one to treat Markov channel models with complicated, high-dimensional state spaces and calculate from them the noise in the coding variable, i.e., the spike time. Secondly, the suprathreshold filtering properties of neurons are derived, based on the phase response curves (PRCs), by perturbing the associated Fokker-Planck equations. It turns out that key characteristics of the filter, such as the DC component of the gain and the behaviour near the fundamental frequency and its harmonics, are related to the particular Fourier components of the PRC and hence the bifurcation type of the neuron. With the help of the derived filter and further approximations one is able to calculate the frequency-resolved signal-to-noise ratio and finally the total information transmission rate of a conductance-based model. Using the method of numerical continuation it is possible to calculate the change in spike time noise level as well as the filtering properties for arbitrary changes in biophysical parameters such as varying channel densities or mean input to the cell. We extend the phase reduction to include correction terms from the amplitude dynamics that are related to the curvature of the isochrons and provide a method to identify the required amplitude sensitivities numerically. It can be shown that the curvature of the isochron has a direct consequence for the noise-induced frequency shift.
Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.
Neal, Beau C. "Performance of MIMO Space-Time Coding Algorithms on a Parallel DSP Test Platform." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd1888.pdf.
Deshpande, Nikhil 1978. "Matlab implementation of GSM traffic channel [electronic resource] / by Nikhil Deshpande." University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000167.
Full textDocument formatted into pages; contains 62 pages
Thesis (M.S.E.E.)--University of South Florida, 2003.
Includes bibliographical references.
Text (Electronic thesis) in PDF format.
ABSTRACT: The GSM platform is an extremely successful wireless technology and an unprecedented story of global achievement. The GSM platform is growing and evolving and offers expanded and feature-rich voice and data services. General Packet Radio Service (GPRS) will have a tremendous transmission rate, which will make a significant impact on most of the existing services. Additionally, GPRS stands ready for the introduction of new services as operators and users, both business and private, appreciate the capabilities and potential that GPRS provides. Services such as the Internet, videoconferencing and on-line shopping will be as smooth as talking on the phone. Moreover, the capability and ease of access to these services increase at work, at home or during travel. In this research the traffic channel of a GSM system was studied in detail and simulated in order to obtain a performance analysis. Matlab, software from Mathworks, was used for the simulation.
ABSTRACT: Both the forward and the reverse links of a GSM system were simulated. A flat fading model was used to model the channel. Signal-to-Noise Ratio (SNR) was the primary metric varied during the simulation. All the building blocks for a traffic channel, including a convolutional encoder, an interleaver and a modulator, were coded in Matlab. Finally, the GPRS system, which is an enhancement of the GSM system for data services, was introduced.
Deshpande, Nikhil. "Matlab implementation of GSM traffic channel." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000167.
Yusuf, Idris A. "Optimising cooperative spectrum sensing in cognitive radio networks using interference alignment and space-time coding." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/21106.
Owojaiye, Gbenga Adetokunbo. "Design and performance analysis of distributed space time coding schemes for cooperative wireless networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/8970.
Lúdik, Michal. "Porovnání hlasových a audio kodeků." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.
Friedrich, Tomáš. "Komprese DNA sekvencí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237222.
Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.
Pitzalis, Nicolas. "Plant-virus interactions : role of virus- and host-derived small non-coding RNAs during infection and disease." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ103.
In this thesis, I investigated the role of host- and virus-derived sRNAs during infection of Rapeseed (Brassica napus, Canola) by the UK1 strain of Turnip mosaic virus (TuMV-UK1). By using a TuMV derivative tagged with a gene encoding green fluorescent protein (TuMV-GFP), two rapeseed cultivars (‘Drakkar’ and ‘Tanto’) that differ in susceptibility to this virus were identified. Transcriptional profiling of local infection foci in Drakkar and Tanto leaves by next generation sequencing (NGS) revealed numerous differentially expressed genes. The same RNA samples from mock- and virus-treated Drakkar and Tanto leaves were also used for the global NGS profiling of sRNAs (sRNAseq) and their potential RNA targets (PAREseq). The bioinformatic analysis and their in vivo validation led to the identification of transcript cleavage events involving known and yet unknown miRNAs. Importantly, the results indicate that TuMV hijacks the host RNA silencing pathway with siRNAs derived from its own genome (vsiRNAs) to target host genes. The virus also triggers the widespread targeting of host messenger RNAs (mRNAs) through activation of phased, secondary siRNA production from PHAS loci. In turn, both vsiRNAs and host-derived siRNAs (hsRNAs) target and cleave the viral RNA by the RISC-mediated pathway. These observations illuminate the role of host and virus-derived sRNAs in the coordination of virus infection. Another chapter of this thesis is dedicated to the analysis of virus-induced diseases by using Arabidopsis plants infected with the Oilseed rape mosaic tobamovirus (ORMV) as a model. Initially, the infected plants develop leaves with strong disease symptoms. However, at a later stage, disease-free, “recovered” leaves start to appear. Analysis of symptom recovery led to the identification of a mechanism in which the VSR and virus-derived siRNAs play a central role.
I used Arabidopsis mutants impaired in transcriptional and post-transcriptional silencing pathways (TGS and PTGS, respectively) and a plant line carrying a promoter-driven GFP transgene silenced by PTGS (Arabidopsis line 8z2). Using various techniques able to monitor virus infection, small and long viral RNA molecules, VSR activity, as well as phloem-mediated transport within these lines, this study led to the identification of genes required for disease symptoms and disease symptom recovery. Moreover, the observations allowed us to propose a model in which symptom recovery occurs upon robust delivery of antiviral secondary vsiRNAs from source to sink tissues, and establishment of a vsiRNA dosage able to block the VSR activity involved in the formation of disease symptoms.
Krejčí, Michal. "Komprese dat." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217934.
Abo Khayal, Layal. "Transkriptomická charakterizace pomocí analýzy RNA-Seq dat." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-369382.
Ondra, Josef. "Komprese signálů EKG s využitím vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217209.
Arnison, Matthew Raphael. "Phase control and measurement in digital microscopy." University of Sydney. Physics, 2003. http://hdl.handle.net/2123/569.
Almeida, João Paulo Pereira de. "O transcritoma antisense primário de Halobacterium salinarum NRC-1." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/17/17131/tde-15012019-101127/.
Antisense RNAs (asRNAs) constitute the most numerous class of non-coding RNAs (ncRNAs) detected by high-throughput transcriptome methods in prokaryotes. Despite this abundance, little is known about the regulatory mechanisms and evolutionary aspects of these molecules, mainly in archaea, where the mechanism of double-stranded RNA (dsRNA) degradation remains poorly understood. In this study, using dRNA-seq data, we identified 1626 antisense transcription start sites (aTSSs) in the genome of Halobacterium salinarum NRC-1, an important model organism for gene expression regulation studies in Archaea. By integrating gene expression data from 18 RNA-seq paired-end libraries, we were able to annotate 846 asRNAs from mapped aTSSs. We found asRNAs in ~21% of annotated genes, including genes related to important characteristics of this organism, such as gas vesicle proteins, bacteriorhodopsin, translation machinery and transposases. We also found asRNAs in type II toxin-antitoxin systems, and using public dRNA-seq data we show evidence that this phenomenon might be conserved in archaea and bacteria. The interaction of an ncRNA with its target may depend on the action of intermediary proteins. In archaea, the LSm protein is an RNA chaperone homologous to bacterial Hfq, involved in post-transcriptional regulation. We used RIP-seq data from RNAs immunoprecipitated with LSm and identified 91 asRNAs interacting with this protein; for 81 of these the mRNA of the sense gene is also interacting. We searched for aTSSs present in the same region of orthologous genes in Haloferax volcanii and found 160 aTSSs originating asRNAs in H. salinarum NRC-1 that might be conserved in these two archaea. The expression of annotated asRNAs was analyzed over a growth curve and in a knockout strain for the RNase R gene. We found 144 asRNAs differentially expressed over the growth curve; for 56 of these the sense gene was also differentially expressed, characterizing possible cis-regulatory asRNAs.
In the knockout strain we found five differentially expressed asRNAs and only one asRNA/gene pair; this result does not allow us to infer an in vivo dsRNA degradation activity for this RNase in H. salinarum NRC-1. This work contributes to the discovery of the antisense transcriptome in H. salinarum NRC-1, a relevant step towards uncovering the post-transcriptional gene regulatory network in this archaeon.
Štys, Jiří. "Implementace statistických kompresních metod." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413295.
Chang, Chih-Peng, and 張志鵬. "Segmented Vertex Chain Coding with Huffman Coding." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/56936669407017461858.
朝陽科技大學
資訊工程系碩士班
96
To significantly decrease the amount of information while still preserving the contour shape, chain coding is widely applied in digital image analysis, especially to raster-shaped images. In this paper, chain coding is integrated with the Single-side Grown Huffman Table (SGHT) to improve the data compression rate.
Baltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.
Zheng, Li-Wen, and 鄭力文. "Personalize metro-style user interface by Huffman coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/rdxj52.
國立臺灣大學
工程科學及海洋工程學研究所
105
In order to satisfy the need for visual information in user interfaces, Microsoft proposed the Metro UI design, whose dynamic tiles show the importance of the various functions in the operating system. The design received much attention, and many websites have begun to follow this concept in their user interfaces. This research uses the concept to build a new user interface for a Web portal that dynamically presents usage requirements. However, the size and layout of current dynamic tiles must be set in advance, and there is no effective method for computing them automatically. In this study, an automated Metro UI algorithm is proposed that computes the dynamic tile size and layout from the user's usage frequency of system functions through Huffman coding. The experimental results show that the dynamic Metro UI presented in this research is better suited to adjusting the user experience to different needs than the traditional static, fixed-size Metro UI.
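The frequency-to-tile-size idea can be sketched as follows: compute Huffman code lengths (tree depths) from usage frequencies, then give shallower, more frequent items larger tiles. The function names and click counts below are hypothetical, and this is only an illustration of the principle, not the thesis's exact layout algorithm:

```python
import heapq

def huffman_lengths(freqs):
    """Map {item: frequency} -> {item: Huffman code length (tree depth)}."""
    heap = [(f, [name]) for name, f in freqs.items()]
    heapq.heapify(heap)
    depth = {name: 0 for name in freqs}
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, s2 = heapq.heappop(heap)
        for name in s1 + s2:           # every member sinks one level deeper
            depth[name] += 1
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return depth

# Hypothetical click counts for four portal functions:
usage = {'mail': 8, 'news': 4, 'maps': 2, 'misc': 2}
depths = huffman_lengths(usage)
# Frequently used functions get shallow codes, hence larger tile spans:
tile_span = {f: 2 ** (max(depths.values()) - d) for f, d in depths.items()}
```

With these counts, 'mail' ends up one level deep and gets the largest tile, while the rarely used functions share the smallest size.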
Γρίβας, Απόστολος. "Μελέτη και υλοποίηση αλγορίθμων συμπίεσης." Thesis, 2011. http://nemertes.lis.upatras.gr/jspui/handle/10889/4336.
In this thesis we study some data compression algorithms and implement them. The basic principles of coding are reviewed and the mathematical foundation of information theory is presented, along with different types of codes. Then Huffman coding and arithmetic coding are analyzed in detail. Finally, the two codings are implemented on a computer in the C programming language in order to compress text files. The resulting files are compared with files compressed using commercial programmes, the causes of differences in efficiency are analyzed and useful conclusions are drawn.
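A minimal Huffman text-compression round trip of the kind such theses implement (sketched here in Python rather than C for brevity):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix-free code {symbol: bitstring} from symbol frequencies."""
    counts = Counter(text)
    if len(counts) == 1:                     # degenerate one-symbol input
        return {next(iter(counts)): '0'}
    heap = [(n, [c]) for c, n in counts.items()]
    heapq.heapify(heap)
    code = {c: '' for c in counts}
    while len(heap) > 1:
        n1, s1 = heapq.heappop(heap)         # two least frequent subtrees
        n2, s2 = heapq.heappop(heap)
        for c in s1:                         # prepend the branch bit
            code[c] = '0' + code[c]
        for c in s2:
            code[c] = '1' + code[c]
        heapq.heappush(heap, (n1 + n2, s1 + s2))
    return code

def encode(text, code):
    return ''.join(code[c] for c in text)

def decode(bits, code):
    inverse = {v: k for k, v in code.items()}
    out, word = [], ''
    for b in bits:
        word += b
        if word in inverse:                  # prefix-free: first hit is a symbol
            out.append(inverse[word])
            word = ''
    return ''.join(out)

code = huffman_code("abracadabra")
bits = encode("abracadabra", code)
assert decode(bits, code) == "abracadabra"
assert len(bits) < 8 * len("abracadabra")    # shorter than fixed 8-bit coding
```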
Wang, Ruey-Jen, and 王瑞禎. "On the Design and VLSI architecture for Dynamic Huffman Coding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98190957224448119235.
國立成功大學
電機工程研究所
82
Huffman coding is a lossless data compression technique that achieves compact data representation by taking advantage of the statistical characteristics of the source. It is widely used in many data compression applications, such as high-definition television, disk operating systems, video coding, and large-scale data communication. Dynamic Huffman coding (DHC) can compress any data file without a preview pass. Compared with adaptive Huffman coding, the DHC method requires less memory and needs no side information. Compared with static Huffman coding, the DHC method achieves a better compression ratio. In this paper, a modified algorithm and CAM-based architectures for DHC are presented. The output throughput of the encoder is 1 bit/cycle. Based on this architecture, the DHC encoder chip is implemented. The chip has a gate count of 17652 and a die area of 4.8mm*4.8mm using the TSMC 0.8um SPDM process. From the timing analysis, the operating frequency is about 20 MHz.
Huang, Ya-Chen, and 黃雅臻. "Efficient Test Pattern Compression Techniques Based on Complementary Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93893687721312782837.
輔仁大學
電子工程學系
97
In this thesis, complementary Huffman encoding techniques are proposed for test data compression of complex SOC designs during manufacturing testing. The correlations between blocks of bits in a test data set are exploited so that more test blocks can share the same codeword. Therefore, besides the compatible blocks used in previous works, the complementary property between test blocks can also be used. Based on this property, two algorithms are proposed for Huffman encoding. With these techniques, more test blocks share the same codeword and the size of the Huffman tree is reduced. This not only reduces the area overhead of the decoding circuitry but also substantially increases the compression ratio. To facilitate the proposed complementary encoding techniques, a don't-care assignment algorithm is also proposed. According to experimental results, the area overhead of the decompression circuit is lower than that of the full Huffman coding technique. Moreover, the compression ratio is higher than that of the selective and optimal selective Huffman coding techniques proposed in previous works.
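The complementary-block idea can be sketched as follows: a block and its bitwise complement map to one representative plus a one-bit flag, so both can share a single Huffman codeword. The block contents below are hypothetical, and the thesis's don't-care assignment algorithm is not modeled:

```python
from collections import Counter

def canonical_block(block):
    """Return (representative, flag): the block itself or its complement."""
    comp = ''.join('1' if b == '0' else '0' for b in block)
    # Pick the lexicographically smaller form as the shared representative.
    return (block, '0') if block <= comp else (comp, '1')

data = ['0110', '1001', '0110', '0000', '1111', '1001']
reps = [canonical_block(b) for b in data]
# Only the distinct representatives need Huffman codewords:
alphabet = Counter(rep for rep, _ in reps)
print(alphabet)  # six blocks collapse to two codeword symbols
```

Here '1001' is the complement of '0110' and '1111' of '0000', so six blocks need only two codewords (plus one flag bit each), shrinking the Huffman tree as the abstract describes.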
Liu, Chia-Wei, and 劉家維. "A Dynamic Huffman Coding Method for TLC NAND Flash Memory." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ujktq2.
國立臺灣科技大學
電子工程系
107
Recently, NAND flash memory has gradually replaced traditional hard-disk drives and become the most mainstream storage device. NAND flash memory has many advantages, such as non-volatility, small size, low power consumption, fast access speed, and shock resistance. With advances in process technology, NAND flash memory has evolved from single-level cell (SLC) and multi-level cell (MLC) into triple-level cell (TLC) and even quad-level cell (QLC). Although NAND flash memory has many advantages, it also has many physical problems, such as the erase-before-write characteristic and the limited number of P/E cycles. Moreover, TLC NAND flash memory suffers from low reliability and a short lifetime. Thus, we propose a dynamic Huffman coding method that can be applied to the write operation of NAND flash memory. Our method dynamically selects a suitable type of Huffman coding for different kinds of data and improves the VTH distribution of NAND flash memory, reducing the bit error rate and improving the reliability of NAND flash memory.
Lin, Che-Lung, and 林哲論. "Bounded Error Huffman Coding in Applications of Wireless Sensor Networks." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61263794337918009117.
國立臺灣大學
工程科學及海洋工程學研究所
102
Measurement error exists in realistic WSN applications due to hardware limitations and application conditions. On the other hand, prolonging the lifetime of a WSN is an important issue because of the limited battery capacity of sensors. In previous research, both Bounded Error Data Compression (BEC) and Improved Bounded Error Data Compression (IBEC) exploited the bounded error in data compression, under the condition of allowing data error, to reduce power consumption in WSNs. Unlike BEC and IBEC, the Bounded Error Huffman Coding (BEHC) proposed in this thesis uses the bounded error directly in Huffman coding. In data-correlation compression, the compression ratio is improved by avoiding excess bits in the code and by eliminating the defect of compressing data within the bounded error to longer codewords. In addition, the study of IBEC in this thesis shows that its data format and spatial-correlation compression still have defects. Therefore, New Improved Bounded Error Data Compression (NIBEC), which uses BEHC offline, is proposed to improve the data format and spatial-correlation compression for more effective compression. In the experiments, four types of raw data with different correlations were tested and the results compared with IBEC. The results show that NIBEC improves the compression ratio by 27%~47% and reduces power consumption by 25%~43%, proving that NIBEC improves the compression effectively.
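The bounded-error principle behind such schemes can be sketched as quantizing each reading into a bin of width 2·eps before entropy coding: the reconstruction error is guaranteed to stay within eps, while the repeated bin indices become highly compressible by Huffman coding. The sensor readings and eps below are hypothetical:

```python
def quantize(values, eps):
    """Map readings to bin indices; each bin centre is within eps of the reading."""
    return [round(v / (2 * eps)) for v in values]

def reconstruct(indices, eps):
    """Recover the approximate readings from bin indices."""
    return [i * 2 * eps for i in indices]

readings = [20.1, 20.2, 19.9, 20.0, 25.3]
idx = quantize(readings, eps=0.25)
approx = reconstruct(idx, eps=0.25)
# Every reconstructed value is within the error bound:
assert all(abs(a - r) <= 0.25 for a, r in zip(approx, readings))
```

After quantization the index 40 occurs four times, so a Huffman coder assigns it a very short codeword; the raw readings, all distinct, would not compress at all.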
Chen, Sze Ching, and 陳思靜. "A Line-Based Lossless Display Frame Compressor Using Huffman Coding and Longest Prefix Match Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/75028784547709858315.
國立清華大學
資訊工程學系
104
We propose a lossless video frame compression algorithm employing a dictionary coding method, the Huffman coding method and three schemes to achieve a high compression ratio. By analyzing the distribution of the differentials between the current pixel and its neighbors, we observe that the smaller the absolute value of a differential, the higher its probability. According to this distribution, we compute the data reduction ratio (DRR) for cases using different numbers of code words and find that the more code words are used, the higher the DRR, which eventually approaches a plateau. Considering memory usage, we choose a suitable number of code words for Huffman encoding. We employ a two-stage classification (TC) scheme consisting of the dictionary coding method and a longest prefix match (LPM) method. In the LPM method, we choose for each pixel group a best truncation length (BTL) using an adaptive prefix bit truncation (APBT) scheme. We further compress the code words with a head code compression (HCC) scheme. Thanks to the large number of code words used, we achieve about 0.5% more bit-rate reduction than a previously proposed algorithm, and only 0.96% less bit-rate reduction than using the maximum dictionary size.
Li, Chih Han, and 李致翰. "A Framework for EEG Compression with Compressive Sensing and Huffman Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28431597266494445081.
國立清華大學
電機工程學系
103
Compressive sensing (CS) is an emerging technique for data compression in recent years. In this thesis, it is used to compress electroencephalogram (EEG) signals. CS rests on two major principles: sparsity and incoherence. However, EEG signals are not sparse enough, so CS can only recover compressed EEG signals at low compression ratios; at high compression ratios, recovery fails. The compression ratios at which EEG can be reconstructed with high quality are not high enough to make the system energy-efficient, so the compression is not meaningful. Thus, we want to find a solution that makes CS practical for compressing EEG signals at high compression ratios. From the literature, approaches to increasing CS performance can be separated into three classes: first, design a stronger reconstruction algorithm; second, find a dictionary in which EEG signals have a sparse representation; lastly, combine CS with other compression techniques. Here we take the first and third approaches. First of all, we propose a modified iterative pseudo-inverse multiplication (MIPIM) with complexity O(KMN), where M is the dimension of the measurements, N is the dimension of the signal, and K is the sparsity level. This complexity is lower than that of most existing algorithms. Next, we extend MIPIM into a multiple measurement vector (MMV) algorithm, called simultaneous MIPIM (SMIPIM), which recovers all channel signals at the same time and exploits the correlation among channels to increase performance. SMIPIM reduces the normalized mean square error (NMSE) by 0.06 compared with the classical algorithms in CS.
For combining CS with other compression techniques, we adopt an existing framework that uses information from the server or receiver node to combine CS and Huffman coding efficiently. The framework was proposed to increase compression for telemedicine with EEG signals, but we found a shortcoming: running the algorithm that produces this information takes a long time. This makes instant telemedicine unavailable, because sensors cannot transmit data until the information is received. Therefore, we propose an algorithm to replace the existing one, reducing the complexity from O(L^5) to O(L^2), where L is the number of channels. In our experiments, our algorithm is 10^5 times faster than the existing one. Finally, we carried out a simulation of the entire system, using our proposed algorithm for computing the channel-correlation information and our SMIPIM for reconstruction. At a compression ratio of 3:1, the NMSE is 0.0672, while the original CS framework with Block Sparse Bayesian Learning Bound Optimization (BSBL-BO) yields 0.1554. On the other hand, given the minimum acceptable NMSE of 0.09 for EEG signals, we achieve a compression ratio of 0.31. Moreover, we use the compression ratio to estimate how many channels can be transmitted in a fixed transmission bandwidth. The result shows that the number of channels can increase by 16 with Bluetooth 2.0 and by 35 with ZigBee for wireless transmission.
Tung, Chi-Yang, and 董啟揚. "A New Method of Image Compression by Wavelet Combining Huffman Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/97960665547728924728.
中原大學
電機工程研究所
104
This study proposes a new method of image compression by wavelet transform combined with Huffman coding, in order to reduce storage space, increase transmission speed and improve image quality. First, we implement image compression by the wavelet transform; in this study the wavelet transform alone serves as our original case and the wavelet transform combined with Huffman coding as our improved case. Second, we encode the wavelet-transformed image with Huffman coding. Third, we simulate the image compression in MATLAB, compressing color and gray-level images and measuring the quality of the compressed images by the PSNR (peak signal-to-noise ratio) value. According to our simulations, the performance of the wavelet transform combined with Huffman coding is significantly better than that of the wavelet transform alone, and it achieves our requirements. The results of our research are as follows: 1. Reduced storage space: encoding the compressed image with Huffman coding reduces the storage space. 2. Increased transmission speed: because Huffman coding is used to encode the image, the image file is smaller than in the original case, which increases transmission speed. 3. Better image quality: we measure the quality of the compressed images by the PSNR value, which shows that the compressed image is not only smaller but also of better quality.
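The PSNR metric used in the abstract can be computed as follows for 8-bit images; the pixel arrays here are toy data, not the thesis's images:

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size 8-bit images."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return float('inf')        # identical images: infinite PSNR
    return 10 * math.log10(peak ** 2 / mse)

a = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical original pixels
b = [52, 54, 61, 66, 71, 61, 64, 73]   # hypothetical reconstructed pixels
print(round(psnr(a, b), 2))            # small MSE gives a high PSNR in dB
```

Higher PSNR means the reconstruction is closer to the original; values above roughly 30 dB are commonly treated as good quality for natural images.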
Li, Jyun-Ruei, and 李俊叡. "The Study of RFID Authentication Schemes Using Message Authentication and Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98965119282757619915.
Full text
Asia University
Department of Computer Science and Information Engineering (Master's Program)
97
As RFID technology matures and its manufacturing cost is constantly reduced, it has already been widely used in many fields such as supply chain management, entrance guard systems, intelligent home appliances, electronic payment, and production automation. While RFID technology brings enormous commercial value and its usage is simple and convenient, it threatens the security and privacy of individuals and organizations. In this thesis, we introduce the privacy and security problems and then propose a new idea: we use Huffman coding to encode the tag ID, and we use a hash function to augment data security. Our scheme lets each RFID tag emit a pseudonym in response to every reader's query, which makes tracking the activities and personal preferences of a tag's owner impractical and thus protects user privacy. In addition, our proposed scheme provides not only high security but also high efficiency.
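The pseudonym mechanism can be sketched as below. The abstract does not give the exact message format, so the use of SHA-256, the 8-byte nonce, and the concatenation order are illustrative assumptions, not the scheme's actual protocol:

```python
import hashlib, os

def huffman_bits_to_bytes(bits):
    """Pack a Huffman-coded bitstring like '10110' into bytes."""
    return int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")

def pseudonym(encoded_tag_id_bits, nonce=None):
    """Fresh pseudonym per query: hash of the Huffman-coded tag ID plus a nonce.

    A fresh random nonce makes successive responses unlinkable to an
    eavesdropper, while a back end that knows the tag IDs can recompute
    the hash to identify the tag.
    """
    nonce = nonce if nonce is not None else os.urandom(8)
    digest = hashlib.sha256(huffman_bits_to_bytes(encoded_tag_id_bits) + nonce)
    return digest.hexdigest(), nonce

p1, _ = pseudonym("10110")
p2, _ = pseudonym("10110")   # different nonce, so a different pseudonym
```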
Hussain, Fatima Omman. "Indirect text entry interfaces based on Huffman coding with unequal letter costs /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965.
Full text
Typescript. Includes bibliographical references (leaves 223-232). Also available on the Internet at the URL above.
Syu, Wei-Jyun, and 許瑋峻. "A Study of Reversible Data Hiding Based on SMVQ and Huffman Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/5uj776.
Full text
National Formosa University
Graduate Institute of Computer Science and Information Engineering
101
Data hiding technology not only embeds high-payload secret data into a digital image but can also reconstruct the original cover image at the receiver. "A reversible and high-payload data hiding scheme implemented in the SMVQ compression domain of images" is proposed in this thesis. The idea of this scheme is to hide secret data in the compression codes of an image by utilizing the sorted state codebook of SMVQ. In the proposed scheme, the compression codes reversibly reconstruct the original VQ-compressed cover image. Besides, the Huffman coding technique is applied to compact the volume of the overall data to be transmitted. The proposed scheme significantly enhances the VQ compression technique and achieves high embedding capacity. The experimental results show that the proposed scheme maintains good visual quality for the reconstructed original VQ-compressed cover image. Moreover, the proposed scheme achieves the best performance among approaches in the literature, with a low average bit rate and a high embedding rate.
Chung, Wei-Sheng, and 鍾偉勝. "Proof of Violation with Adaptive Huffman Coding Hash Tree for Cloud Storage Service." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j622gu.
Full text
National Central University
Department of Computer Science and Information Engineering
106
Although cloud storage services are very popular nowadays, users do not have an effective way to prove that the system is abnormal due to system errors. Users thus cannot claim a loss even when data or files are damaged by internal errors. As a result, enterprise users often do not trust, or even refuse to adopt, cloud storage services because of this problem. We intend to design methods to solve this problem for cloud storage services. In this paper, we focus on Proof of Violation (POV). All updated files in cloud storage are signed by users and service providers with digital signatures, and their hash values are checked to detect the occurrence of violations, ensuring that both parties can trust each other. We propose the Adaptive Huffman Coding Hash Tree Construction (AHCHTC) algorithm for real-time POV of cloud storage services. The proposed algorithm dynamically adds and adjusts hash tree nodes according to the update counters of files. It consumes less execution time and memory space than an existing hash tree-based POV scheme. We further propose the Adaptive Huffman Coding Hash Tree Construction/Counter Adjustment (AHCHTC/CA) algorithm, which improves the AHCHTC algorithm by adjusting the counters of all nodes associated with files while maintaining a hash tree structure that satisfies the sibling property. Thus, the AHCHTC/CA algorithm constructs the hash tree according to recent update counters instead of total update counters. This reflects recent file update patterns and further improves performance. Simulation experiments are conducted to evaluate the performance of the proposed algorithms on the web page update patterns in NCUCCWiki and the file update patterns of the Harvard University network file system provided by the SNIA (Storage Networking Industry Association) IOTTA (Input/Output Traces, Tools, and Analysis) data set. The evaluated performance is compared with that of a related method.
The comparisons show that the proposed algorithms are superior to the related method in terms of computation time and memory overhead. We also present some observations on the experimental results and possible application scenarios of the proposed algorithms.
Kuo, Chung-Wei, and 郭崇韋. "Wireless-LAN Differential Source Coding." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/87720363285216673969.
Full text
Feng Chia University
Graduate Institute of Communications Engineering
93
The e-medical clothing integrates physiological signal sensors, a wireless data transmission module, analysis software on a PDA or PC, and a power supply module on a specially designed clothing platform, which makes it a very powerful medical monitoring system. The wireless data transmission module uses radio-frequency technology, in our case Bluetooth, to improve convenience for patients who need their physiological signals monitored on a regular and long-term basis. Patients can move freely inside the coverage area of the Bluetooth system without any constraint, while medical personnel can effectively retrieve real-time physiological data with no interference to patients. In this thesis, we describe how to transmit physiological data over Bluetooth modules and how to use differential encoding techniques to compress the data before transmission, reducing the power consumption of transmission and in turn prolonging the battery lifespan of the e-medical clothing.
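The differential encoding idea is to transmit the first sample verbatim and then only the successive differences, which are small for slowly varying physiological signals and therefore cheap to send. A minimal illustrative sketch (the thesis targets an embedded Bluetooth platform, not Python):

```python
def delta_encode(samples):
    """First sample verbatim, then successive differences."""
    return samples[:1] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Running sum restores the original waveform exactly (lossless)."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

ecg = [512, 514, 515, 515, 513, 510]   # toy 10-bit ADC samples
deltas = delta_encode(ecg)             # [512, 2, 1, 0, -2, -3]
assert delta_decode(deltas) == ecg
```

The small difference values cluster near zero, so they can be packed into fewer bits (or entropy-coded) than the raw samples, which is where the transmission-power saving comes from.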
Huang, Yen-Lin, and 黃彥菱. "Entropy-based Differential Chain Coding." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/93338243002363491929.
Full text
National Tsing Hua University
Department of Computer Science
89
A simple but efficient technique for encoding object contours is presented. It is based on the chain coding representation and combines the benefits of differential chain coding (DCC) and entropy coding: DCC skews the occurrence distribution of the symbols, and entropy coding assigns short codes to the frequently used symbols. This thesis proposes a simple DCC-based method that adaptively traces the contour direction locally and selects the code symbol accordingly. The method shifts the distribution of symbol occurrences toward the extreme case, so the gain from entropy coding becomes significant. We conducted several experiments to compare our method with DCC plus entropy coding and with the MPEG-4 shape coder. The experimental results show that this simple method is consistently better than DCC and comparable to the MPEG-4 shape coder.
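In an 8-direction chain code, DCC replaces each absolute direction with the turn relative to the previous direction; along smooth contours most turns are 0 or ±1, which skews the symbol distribution in favor of entropy coding. A minimal sketch using a modulo-8 relative code, one common DCC formulation and not necessarily the exact variant in the thesis:

```python
def to_dcc(chain):
    """Absolute 8-direction chain code -> differential (relative turn) code."""
    return chain[:1] + [(b - a) % 8 for a, b in zip(chain, chain[1:])]

def from_dcc(dcc):
    """Invert: accumulate turns back into absolute directions."""
    out = dcc[:1]
    for turn in dcc[1:]:
        out.append((out[-1] + turn) % 8)
    return out

contour = [0, 0, 1, 1, 2, 1, 0, 7]     # absolute directions along a contour
dcc = to_dcc(contour)                  # [0, 0, 1, 0, 1, 7, 7, 7]
assert from_dcc(dcc) == contour
```

Note how the differential stream uses only the symbols {0, 1, 7} (straight, slight left, slight right) for this smooth contour, whereas the absolute stream spreads over more symbols; a Huffman or arithmetic coder then exploits that skew.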
Haung, Chan-Hao, and 黃展浩. "Improving the input speed of a multi-key prosthetic keyboard based on Huffman coding." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11577293147455365349.
Full text
National Chi Nan University
Department of Computer Science and Information Engineering
99
On a conventional keyboard, such as a QWERTY keyboard, there are so many keys that the spaces between neighboring keys are too small for the physically disabled. In this study we propose a novel prosthetic keyboard with a reduced number of keys, so that the space between neighboring keys is sufficient for physically disabled users. Given only 12 keys on the designed keyboard, multiple keystrokes are required to input a character. Each character is encoded using a radix-12 Huffman algorithm, and the code of each character is determined by its appearance frequency in a typical typing task: the higher the appearance frequency of a character, the shorter its code. Experiments with a subject with cerebral palsy showed that the average code length over all characters is 1.48 keystrokes per character. Given the codes, this study further proposes a method to find the optimal keyboard arrangement using the Particle Swarm Optimization (PSO) algorithm. Given the appearance frequency of each key in a typical typing task, the objective function is the total time required for the subject to press the keys, and the optimal keyboard arrangement is the one that minimizes this objective function under PSO. Experiments were conducted to compare the performance of three input methods: the proposed Huffman method, a dual-key method, and a 6-key Morse code method that the subject had used for years. Commonly used typing-speed test software was used to record the typing speed of the subject. The results showed that the proposed Huffman method helps the subject achieve more words per minute than the other two methods.
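A radix-12 Huffman code merges the twelve least frequent subtrees at each step; dummy zero-frequency symbols are first padded in so that the number of leaves minus one is divisible by eleven, making every merge full. An illustrative sketch with a hypothetical frequency table (the thesis's measured frequencies and its 1.48-keystroke average are not reproduced here):

```python
import heapq
from itertools import count

def nary_huffman_lengths(freqs, n=12):
    """freqs: {symbol: frequency}. Returns {symbol: code length} for an n-ary code."""
    tiebreak = count()                    # keeps tuple comparisons well-defined
    heap = [(f, next(tiebreak), {s: 0}) for s, f in freqs.items()]
    # Pad with zero-frequency dummies so every merge can take exactly n subtrees.
    while (len(heap) - 1) % (n - 1) != 0:
        heap.append((0, next(tiebreak), {}))
    heapq.heapify(heap)
    while len(heap) > 1:
        group = [heapq.heappop(heap) for _ in range(n)]
        merged = {}
        for f, _, depths in group:
            merged.update({s: d + 1 for s, d in depths.items()})
        heapq.heappush(heap, (sum(f for f, _, _ in group), next(tiebreak), merged))
    return heap[0][2]

# Hypothetical letter frequencies (a most frequent, z least frequent).
freqs = {chr(ord('a') + i): 27 - i for i in range(26)}
lengths = nary_huffman_lengths(freqs)
# Frequency-weighted average keystrokes per character:
avg = sum(freqs[s] * lengths[s] for s in freqs) / sum(freqs.values())
```

With 26 letters on 12 keys, the most frequent letters get one-keystroke codes and the rarest need up to three; the weighted average stays well below two keystrokes per character, consistent in spirit with the 1.48 figure reported in the abstract.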
HUANG, HSIN-HAN, and 黃信翰. "Hardware Implementation of a Real Time Lossless ECG Signal Compressor with Improved Multi-Stage Huffman Coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/58420664973565933749.
Full text
Fu Jen Catholic University
Department of Electrical Engineering (Master's Program)
105
Electrocardiogram (ECG) monitoring systems are widely used in healthcare and telemedicine. The ECG signals must be compressed to enable efficient transmission and storage, and real-time monitoring is required, so it is challenging to meet both the real-time requirement and the transmission bandwidth limit. In this paper, we propose a hardware implementation of a real-time lossless ECG signal compressor. A modified error predictor and a multi-stage Huffman encoding algorithm are proposed. Without increasing hardware cost, we use two-stage encoding tables to realize multi-stage encoding, which yields better compression efficiency. We implemented the lossless compressor hardware on an ARM-based FPGA platform. Experiments on the MIT-BIH database show that the proposed design attains comparable compression performance and allows real-time data transmission in a Bluetooth environment.
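The predictor-plus-entropy-coder structure can be illustrated as follows. The thesis's modified error predictor is not specified in the abstract, so the standard second-order linear predictor below is an assumption for illustration; the small prediction errors it produces are what the multi-stage Huffman tables would then encode:

```python
def predict_errors(samples):
    """Errors from the simple linear predictor 2*x[n-1] - x[n-2].

    The first two samples are passed through verbatim so the decoder
    can seed the same predictor. Lossless: errors are exact integers.
    """
    errs = samples[:2]
    for i in range(2, len(samples)):
        errs.append(samples[i] - (2 * samples[i - 1] - samples[i - 2]))
    return errs

def reconstruct(errs):
    """Run the same predictor at the decoder and add back the errors."""
    out = errs[:2]
    for e in errs[2:]:
        out.append(e + 2 * out[-1] - out[-2])
    return out

ecg = [100, 104, 109, 113, 116, 117]   # toy samples on a rising slope
errors = predict_errors(ecg)           # [100, 104, 1, -1, -1, -2]
assert reconstruct(errors) == ecg
```

Because the errors concentrate in a narrow range around zero, a short first-stage Huffman table can cover the common values and escape to a second-stage table for rare large errors, which is the general idea behind multi-stage encoding tables.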
Lai, Po-Yueh, and 賴柏岳. "A Comparison of the Compression Efficiency of Range Encoding and Huffman Coding in the Opus Encoder." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2rrkgd.
Full text
National Taipei University of Technology
Department of Computer Science and Information Engineering
105
Nowadays, streaming is a popular way to listen to digital music online. The MP3 and AAC formats were used in the past, but MP3 has gradually been retired in recent years. Many digital audio formats are now used in streaming technology, one of which is the Opus codec. In this thesis, we study the CELT layer of the Opus codec and replace its original PVQ and range encoding with the Huffman coding used in MP3 and AAC. Through this experiment, we can compare the compression efficiency of range encoding and Huffman coding. The experiment has two parts: first, we obtain the data from the source file; second, we encode the data with the MP3 and AAC Huffman coding methods respectively and compare the results with the original method.