
Dissertations / Theses on the topic 'Differential and Huffman coding'


Consult the top 50 dissertations / theses for your research on the topic 'Differential and Huffman coding.'


1

Románek, Karel. "Nezávislý datalogger s USB připojením." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219113.

Full text
Abstract:
This thesis presents the design of an autonomous USB datalogger for temperature, relative humidity and pressure. It explains the datalogger's operation, the hardware design with emphasis on power consumption, and the design of the chassis. It then describes the communication protocol used by a PC to control the device and read out data, the firmware drivers for selected components, and the modules for USB communication, the RTC and data compression. Finally, it describes the software used to configure the datalogger and read out the stored data.
2

Kilic, Suha. "Modification of Huffman Coding." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21449.

Full text
3

Zou, Xin. "Compression and Decompression of Color MRI Image by Huffman Coding." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17029.

Full text
Abstract:
Magnetic Resonance Imaging (MRI) is a standard examination method in modern medicine that helps doctors assess a patient's condition as soon as possible. Medical MRI images are of high quality and contain a large amount of data, which demands long transmission times and large storage capacity; compression and decompression technology reduces both. Most MRI images today are in colour, yet most theses still study grayscale images, so compressing colour MRI images is a comparatively new research area. This thesis first introduces basic theories of compression technology and the relevant medical background, then explains the basic structure and kernel algorithm of Huffman coding in detail. Finally, Huffman coding is implemented in MATLAB to compress and decompress colour MRI images. The experiments show that Huffman coding achieves a high compression ratio and coding efficiency on colour MRI images.
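The kernel algorithm named in the abstract can be illustrated independently of MATLAB. The following is a minimal Python sketch of classic Huffman coding (build a tree by repeatedly merging the two least frequent nodes, then read codewords off the branches); it is a generic illustration, not the thesis's implementation:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table {symbol: bitstring} for the input sequence."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreak, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {heap[0][2]: "0"}
    i = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (t1, t2)))
        i += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf symbol
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

data = "abracadabra"
table = huffman_code(data)
encoded = "".join(table[s] for s in data)
```

For image compression, the pixel values of a colour channel would take the place of the characters here; because the table is prefix-free, decoding is unambiguous.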
4

Griffin, Anthony. "Coding CPFSK for differential demodulation." Thesis, University of Canterbury. Electrical and Electronic Engineering, 2000. http://hdl.handle.net/10092/6031.

Full text
Abstract:
A differential encoder is developed that preserves the phase trellis of continuous phase frequency shift keying (CPFSK) through differential demodulation. This differential encoder interfaces well with the decomposed model of CPFSK, creating a decomposed model of differentially-encoded and differentially-demodulated CPFSK (DCPFSK). The normalised minimum squared Euclidean distance d²min of uncoded DCPFSK is calculated. A code search model is developed, allowing codes over rings to be specifically designed for DCPFSK. The results of code searches show that there is very little loss in d²min when comparing coded DCPFSK systems with coherently-demodulated coded CPFSK systems. The performance of uncoded and coded DCPFSK systems in both additive white Gaussian noise (AWGN) and Rayleigh flat fading is analysed and simulated. DCPFSK is shown to be relatively robust to medium to slowly-varying fading, without the use of any additional techniques. Rate-1/2 encoded quaternary DCPFSK with modulation index h = 1/4 is compared with coherently-demodulated uncoded MSK and differentially-encoded and differentially-demodulated minimum shift keying (DMSK) without error-control coding, in AWGN and Rayleigh flat fading. The coded system shows that significant performance improvement can be obtained through simple coding, particularly in Rayleigh flat fading.
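As a generic illustration of the differential principle this abstract builds on (not the thesis's CPFSK-specific encoder), the sketch below encodes information into phase *differences*, so a constant unknown phase offset at the receiver cancels out in demodulation:

```python
import numpy as np

M = 4  # quaternary phase alphabet (illustrative)

def diff_encode(info):
    """Transmitted phase index is the running sum of information indices,
    so the information sits in phase differences, not absolute phases."""
    return np.cumsum(np.concatenate(([0], info))) % M

def diff_demod(rx):
    """Recover information from differences of consecutive received phases;
    no absolute phase reference is needed."""
    return np.diff(rx) % M

info = np.array([1, 3, 0, 2, 1])
tx = diff_encode(info)
rx = (tx + 2) % M  # the channel adds a constant, unknown phase offset
```

Demodulating either `tx` or the offset `rx` recovers the same information sequence, which is exactly why differential schemes need no carrier phase recovery.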
5

Song, Lingyang. "Differential space-time coding techniques and MIMO." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434157.

Full text
6

Nelson, Tom. "ALAMOUTI SPACE-TIME CODING FOR QPSK WITH DELAY DIFFERENTIAL." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/607483.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Space-time coding (STC) for QPSK where the transmitted signals are received with the same delay is well known. This paper examines the case where the transmitted signals are received with a nonnegligible delay differential when the Alamouti 2x1 STC is used. Such a differential can be caused by a large spacing of the transmit antennas. In this paper, an expression for the received signal with a delay differential is derived and a decoding algorithm for that signal is developed. In addition, the performance of this new algorithm is compared to the standard Alamouti decoding algorithm for various delay differentials.
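For reference, the standard 2x1 Alamouti scheme that the paper extends can be sketched as follows (zero delay differential, noiseless flat fading; variable names are illustrative). Combining yields (|h1|² + |h2|²) times each symbol, which is the diversity gain:

```python
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def alamouti_encode(s1, s2):
    """One Alamouti block: rows are the two transmit antennas,
    columns the two symbol periods."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Standard 2x1 combining, valid when both signals arrive with the
    same delay (zero delay differential)."""
    y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    y2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    return y1, y2

s1, s2 = qpsk[0], qpsk[3]
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)  # flat Rayleigh gains
X = alamouti_encode(s1, s2)
r = h1 * X[0] + h2 * X[1]      # single receive antenna, two symbol periods
y1, y2 = alamouti_combine(r[0], r[1], h1, h2)
gain = abs(h1) ** 2 + abs(h2) ** 2  # combining yields gain * s_k exactly
```

A nonzero delay differential breaks the orthogonality this combining relies on, which is what motivates the trellis-based decoder derived in the paper.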
7

Wong, K. H. J. "Adaptive differential pulse code modulation and sub-band coding of speech signals." Thesis, University of Southampton, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380170.

Full text
8

Yoshida, K. "Speech coding by adaptive differential pulse code modulation with adaptive bit allocation." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37905.

Full text
9

Karlsson, Joakim. "Differential and co-expression of long non-coding RNAs in abdominal aortic aneurysm." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-236141.

Full text
Abstract:
This project explores the presence and interactions of long non-coding RNA transcripts in an experimental atherosclerosis mouse model with relevance for human abdominal aortic aneurysm development. A total of 187 long non-coding RNAs, two of them entirely novel, were found to be differentially expressed between angiotensin II-treated apolipoprotein E-deficient mice (developing abdominal aortic aneurysms) and non-treated mice (not developing aneurysms) harvested after the same period of time. These transcripts were also studied with regard to co-expression network connections. Eleven previously annotated and two novel long non-coding RNAs were present in two significantly disease-correlated co-expression groups, which were further profiled with respect to network properties, Gene Ontology terms and MetaCore© connections.
10

Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.

Full text
Abstract:
Searching for words in a large document collection is a very common problem today, as the widespread use of search engines shows. For searches to run in time independent of the collection size, the collection must be indexed once in advance; the size of such indexes is typically linear in the size of the document collection. Data compression is another widely used resource for coping with the ever-growing size of document collections. This study combines indexed search with data compression, examining alternatives to existing solutions and aiming to improve both query response time and the memory consumed by the indexes. The analysis of index structures together with compression algorithms shows that a block inverted file combined with word-based Huffman compression is an excellent option for memory-constrained systems, since it provides random access and compressed search. This work also proposes new prefix-free codes that improve the compression obtained and generate self-synchronized codes, i.e., codes with truly viable random access. The advantage of these new codes is that the proposed mappings eliminate the need to build the Huffman code tree, which translates into memory savings, more compact encoding and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
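The block inverted file mentioned in the abstract can be sketched in a few lines: instead of storing exact word positions, the index stores only the numbers of the blocks where each word occurs, trading pointer precision for space. A minimal Python illustration (block size and tokenization are illustrative choices, not the thesis's implementation):

```python
def build_block_index(text, block_size=4):
    """Block inverted file: map each word to the set of block numbers
    (groups of block_size consecutive words) where it occurs."""
    index = {}
    for i, w in enumerate(text.split()):
        index.setdefault(w.lower(), set()).add(i // block_size)
    return index

def search(index, word):
    """Return the sorted list of candidate blocks to scan for the word."""
    return sorted(index.get(word.lower(), ()))

idx = build_block_index("to be or not to be that is the question")
```

A query then decompresses and scans only the returned blocks, which is why pairing this index with a compression scheme that supports random access (such as word-based Huffman) matters.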
11

Nelson, Tom. "SPACE-TIME CODED SOQPSK IN THE PRESENCE OF DIFFERENTIAL DELAYS." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605785.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
This paper presents a method of detecting the Tier I modulation SOQPSK when it is used in a space-time coded (STC) system in which there is a non-negligible differential delay between the received signals. Space-time codes are useful to eliminate data dropouts which occur on aeronautical telemetry channels in which transmit diversity is employed. The proposed detection algorithm employs a trellis to detect the data while accounting for the offset between the in-phase and quadrature-phase components of the signals as well as the differential delay. The performance of the system is simulated and presented and it is shown that the STC eliminates the BER floor which results from the data dropouts.
12

Nelson, N. Thomas. "Space-Time Coding with Offset Modulations." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2155.pdf.

Full text
13

Chembil, Palat Ramesh. "VT-STAR design and implementation of a test bed for differential space-time block coding and MIMO channel measurements." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35712.

Full text
Abstract:
Next-generation wireless communications require the transmission of reliable high data rate services. Second-generation wireless systems use a single-input multiple-output (SIMO) channel in the reverse link, meaning one transmit antenna at the user terminal and multiple receive antennas at the base station. Recently, information-theoretic research has shown an enormous potential growth in the capacity of wireless systems obtained by using multiple antenna arrays at both ends of the link. Space-time coding exploits the spatial-temporal diversity provided by the multiple-input multiple-output (MIMO) channel, significantly increasing both system capacity and the reliability of the wireless link. The Virginia Tech Space-Time Advanced Radio (VT-STAR) system is a test bed that demonstrates the capabilities of space-time coding techniques in real time. Core algorithms are implemented on Texas Instruments TMS320C67 Evaluation Modules (EVM). The radio frequency subsystem is composed of multi-channel transmitter and receiver chains implemented in hardware for over-the-air transmission. The capabilities of the MIMO channel are demonstrated in a non-line-of-sight (NLOS) indoor environment. To characterize the capacity gains in an indoor environment, the test bed was also modified to take channel measurements. This thesis reports the system design of VT-STAR and the channel capacity gains observed for MIMO channels in an indoor environment.
Master of Science
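The information-theoretic capacity gain the abstract refers to is commonly computed, for a fixed channel matrix H with equal power allocation, as log2 det(I + (SNR/Nt) · H·Hᴴ) bits/s/Hz. A small Python sketch with illustrative random channels (not measured VT-STAR data):

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity of a fixed MIMO channel with equal power allocation:
    log2 det(I + (snr / Nt) * H H^H), in bits/s/Hz."""
    nr, nt = H.shape
    G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(G))))

rng = np.random.default_rng(1)
snr = 100.0  # 20 dB
H_siso = rng.normal(size=(1, 1)) + 1j * rng.normal(size=(1, 1))
H_mimo = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
```

For rich-scattering channels the 4x4 capacity grows roughly linearly with the number of antennas at fixed SNR, which is the gain a MIMO measurement campaign tries to verify indoors.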
14

Schleimer, Jan Hendrik. "Spike statistics and coding properties of phase models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2013. http://dx.doi.org/10.18452/16788.

Full text
Abstract:
The goal of this thesis is to establish quantitative, analytical relations between the biophysical properties of nerve membranes and the computations performed by tonically spiking neurons in the presence of intrinsic noise. Two major lines of investigation are followed. Firstly, microscopic noise caused by the stochastic opening and closing of ion channels is mapped to the macroscopic spike-time jitter that affects neural coding. The method is generic enough to treat Markov channel models with complicated, high-dimensional state spaces and to calculate from them the noise in the coding variable, i.e., the spike time. Secondly, the suprathreshold filtering properties of neurons are derived from their phase response curves (PRCs) by perturbing the associated Fokker-Planck equations. It turns out that key characteristics of the filter, such as the DC component of the gain and the behaviour near the fundamental frequency and its harmonics, are related to particular Fourier components of the PRC and hence to the bifurcation type of the neuron. With the help of the derived filter and further approximations, one can calculate the frequency-resolved signal-to-noise ratio and finally a lower bound on the total information transmission rate of a conductance-based model. Using numerical continuation, the change in spike-time noise level and in the filtering properties can be tracked for arbitrary changes in biophysical parameters, such as varying channel densities or the mean input to the cell. The phase reduction is extended with correction terms from the amplitude dynamics, which are related to the curvature of the isochrons, together with a method to identify the required amplitude sensitivities numerically. It is shown that the curvature of the isochron determines whether noise induces a positive or a negative frequency shift.
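The final step described in the abstract, turning a frequency-resolved signal-to-noise ratio into a lower bound on the information transmission rate, is commonly computed as the integral of log2(1 + SNR(f)) over frequency. A minimal Python sketch with a hypothetical SNR profile (the profile and units are illustrative, not results from the thesis):

```python
import numpy as np

def info_rate_lower_bound(freqs, snr):
    """Shannon-type lower bound on the information rate from a
    frequency-resolved SNR: integral of log2(1 + SNR(f)) df,
    evaluated with a simple trapezoidal sum (bits per second)."""
    integrand = np.log2(1.0 + snr)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freqs)))

freqs = np.linspace(0.0, 100.0, 501)       # Hz
snr_profile = 4.0 * np.exp(-freqs / 30.0)  # hypothetical SNR falling with frequency
rate = info_rate_lower_bound(freqs, snr_profile)
```

The bound is exact for Gaussian signal and noise; for spiking neurons it serves as the conservative estimate the abstract mentions.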
15

Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.

Full text
16

Neal, Beau C. "Performance of MIMO Space-Time Coding Algorithms on a Parallel DSP Test Platform." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1888.pdf.

Full text
17

Deshpande, Nikhil 1978. "Matlab implementation of GSM traffic channel [electronic resource] / by Nikhil Deshpande." University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000167.

Full text
Abstract:
Thesis (M.S.E.E.)--University of South Florida, 2003.
The GSM platform is an extremely successful wireless technology and an unprecedented story of global achievement. It continues to grow and evolve, offering expanded, feature-rich voice and data services. The General Packet Radio Service (GPRS) will have a tremendous transmission rate, which will make a significant impact on most existing services, and it stands ready for the introduction of new services as operators and users, both business and private, come to appreciate its capabilities and potential. Services such as the Internet, videoconferencing and on-line shopping will be as smooth as talking on the phone, and access to them will be equally easy at work, at home or during travel. In this research, the traffic channel of a GSM system was studied in detail and simulated in order to obtain a performance analysis; Matlab, software from Mathworks, was used for the simulation.
Both the forward and the reverse links of a GSM system were simulated. A flat-fading model was used for the channel, and the signal-to-noise ratio (SNR) was the primary metric varied during the simulation. All the building blocks of a traffic channel, including a convolutional encoder, an interleaver and a modulator, were coded in Matlab. Finally, GPRS, an enhancement of GSM for data services, was introduced.
18

Deshpande, Nikhil. "Matlab implementation of GSM traffic channel." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000167.

Full text
19

Yusuf, Idris A. "Optimising cooperative spectrum sensing in cognitive radio networks using interference alignment and space-time coding." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/21106.

Full text
Abstract:
In this thesis, the process of optimizing Cooperative Spectrum Sensing in Cognitive Radio has been investigated in fast-fading environments, where simulation results have shown that its performance is limited by the probability of reporting errors. By proposing a transmit diversity scheme using differential space-time block codes (D-STBC), for which channel state information (CSI) is not required, and by regarding multiple pairs of cognitive radios (CRs) with single antennas as virtual MIMO antenna arrays in multiple clusters, differential space-time coding is applied for the purpose of decision reporting over Rayleigh channels. Both hard and soft combination schemes were investigated at the fusion centre, revealing performance advantages for hard combination schemes due to their minimal bandwidth requirements and simple implementation. The simulation results show that this optimization process achieves full transmit diversity, albeit with a slight performance degradation in terms of power, while improving on conventional Cooperative Spectrum Sensing over non-ideal reporting channels. Further research carried out in this thesis shows performance deficits of Cooperative Spectrum Sensing due to interference on the sensing channels of Cognitive Radio. Interference Alignment (IA), a revolutionary wireless transmission strategy that reduces the impact of interference, is well suited to optimizing the performance of Cooperative Spectrum Sensing. The idea of IA is to coordinate multiple transmitters so that their mutual interference aligns at their receivers, facilitating simple interference cancellation techniques.
Since its inception, research efforts have primarily been focused on verifying IA's ability to achieve the maximum degrees of freedom (an approximation of sum capacity), developing algorithms for determining alignment solutions and designing transmission strategies that relax the need for perfect alignment but yield better performance. With the increased deployment of wireless services, CR's ability to opportunistically sense and access the unused licensed frequency spectrum, without causing harmful interference to the licensed users becomes increasingly diminished, making the concept of introducing IA in CR a very attractive proposition. For a multiuser multiple-input-multiple-output (MIMO) overlay CR network, a space-time opportunistic IA (ST-OIA) technique has been proposed that allows spectrum sharing between a single primary user (PU) and multiple secondary users (SU) while ensuring zero interference to the PUs. With local CSI available at both the transmitters and receivers of SUs, the PU employs a space-time WF (STWF) algorithm to optimize its transmission and in the process, frees up unused eigenmodes that can be exploited by the SU. STWF achieves higher performance than other WF algorithms at low to moderate signal-to-noise ratio (SNR) regimes, which makes it ideal for implementation in CR networks. The SUs align their transmitted signals in such a way their interference impairs only the PU's unused eigenmodes. For the multiple SUs to further exploit the benefits of Cooperative Spectrum Sensing, it was shown in this thesis that IA would only work when a set of conditions were met. The first condition ensures that the SUs satisfy a zero interference constraint at the PU's receiver by designing their post-processing matrices such that they are orthogonal to the received signal from the PU link. The second condition ensures a zero interference constraint at both the PU and SUs receivers i.e. 
the constraint ensures that no interference from the SU transmitters is present at the output of the post-processing matrices of its unintended receivers. The third condition caters for the multiple SUs scenario to ensure interference from multiple SUs are aligned along unused eigenmodes. The SU system is assumed to employ a time division multiple access (TDMA) system such that the Principle of Reciprocity is employed towards optimizing the SUs transmission rates. Since aligning multiple SU transmissions at the PU is always limited by availability of spatial dimensions as well as typical user loads, the third condition proposes a user selection algorithm by the fusion centre (FC), where the SUs are grouped into clusters based on their numbers (i.e. two SUs per cluster) and their proximity to the FC, so that they can be aligned at each PU-Rx. This converts the cognitive IA problem into an unconstrained standard IA problem for a general cognitive system. Given the fact that the optimal power allocation algorithms used to optimize the SUs transmission rates turns out to be an optimal beamformer with multiple eigenbeams, this work initially proposes combining the diversity gain property of STBC, the zero-forcing function of IA and beamforming to optimize the SUs transmission rates. However, this solution requires availability of CSI, and to eliminate the need for this, this work then combines the D-STBC scheme with optimal IA precoders (consisting of beamforming and zero-forcing) to maximize the SUs data rates.
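The hard-combination rules investigated at the fusion centre can be sketched in a few lines; the OR, AND and majority rules below are the standard textbook variants of hard-decision fusion, not the thesis's exact implementation:

```python
def fuse(decisions, rule="majority"):
    """Hard-combination fusion of one-bit local sensing decisions
    (1 = 'primary user present') reported to the fusion centre."""
    ones = sum(decisions)
    n = len(decisions)
    if rule == "or":          # declare present if ANY sensor says so
        return int(ones >= 1)
    if rule == "and":         # declare present only if ALL sensors agree
        return int(ones == n)
    if rule == "majority":    # declare present if more than half agree
        return int(ones > n / 2)
    raise ValueError(f"unknown rule: {rule}")
```

Hard combination needs only one bit per reporting CR, which is the minimal-bandwidth advantage the abstract notes over soft (raw energy value) combination.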
20

Owojaiye, Gbenga Adetokunbo. "Design and performance analysis of distributed space time coding schemes for cooperative wireless networks." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/8970.

Full text
Abstract:
In this thesis, space-time block codes originally developed for multiple antenna systems are extended to cooperative multi-hop networks. The designs are applicable to any wireless network setting, especially cellular, ad-hoc and sensor networks where space limitations preclude the use of multiple antennas. The thesis first investigates the design of distributed orthogonal and quasi-orthogonal space-time block codes in cooperative networks with single and multiple antennas at the destination. Numerical and simulation results show that by employing multiple receive antennas the diversity performance of the network is further improved, at the expense of a slight modification of the detection scheme.
The thesis then focuses on designing distributed space-time block codes for cooperative networks in which the source node participates in cooperation. Based on this, a source-assisting strategy is proposed for distributed orthogonal and quasi-orthogonal space-time block codes. Numerical and simulation results show that the source-assisting strategy exhibits improved diversity performance compared to the conventional distributed orthogonal and quasi-orthogonal designs.
Motivated by the problem of channel state information acquisition in practical wireless network environments, the design of differential distributed space-time block codes is investigated. Specifically, a coefficient vector-based differential encoding and decoding scheme is proposed for cooperative networks. The thesis then explores the concatenation of differential strategies with several distributed space-time block coding schemes, namely the Alamouti code, square-real orthogonal codes, complex-orthogonal codes, and quasi-orthogonal codes, using cooperative networks with different numbers of relay nodes.
In order to cater for high data rate transmission in non-coherent cooperative networks, differential distributed quasi-orthogonal space-time block codes capable of achieving full code rate and full diversity are proposed. Simulation results demonstrate that the differential distributed quasi-orthogonal space-time block codes outperform existing distributed space-time block coding schemes in terms of code rate and bit-error-rate performance. A multi-differential distributed quasi-orthogonal space-time block coding scheme is also proposed to exploit the additional diversity path provided by the source-destination link.
A major challenge is how to construct full rate codes for non-coherent cooperative broadband networks with more than two relay nodes while exploiting the achievable spatial and frequency diversity. In this thesis, full rate quasi-orthogonal codes are designed for non-coherent cooperative broadband networks where channel state information is unavailable. From this, a generalized differential distributed quasi-orthogonal space-frequency coding scheme is proposed for cooperative broadband networks. The proposed scheme is able to achieve full rate and full spatial and frequency diversity in cooperative networks with any number of relays. Through pairwise error probability analysis we show that the diversity gain of the proposed scheme can be improved by appropriate code construction and sub-carrier allocation. Based on this, sufficient conditions are derived for the proposed code structure at the source node and relay nodes to achieve full spatial and frequency diversity. In order to exploit the additional diversity paths provided by the source-destination link, a novel multi-differential distributed quasi-orthogonal space-frequency coding scheme is proposed. The overall objective of the new scheme is to improve the quality of the detected signal at the destination with negligible increase in the computational complexity of the detector.
Finally, a differential distributed quasi-orthogonal space-time-frequency coding scheme is proposed to cater for high data rate transmission and improve the performance of non-coherent cooperative broadband networks operating in highly mobile environments. The approach is to integrate the concept of distributed space-time-frequency coding with differential modulation, and to employ rotated-constellation quasi-orthogonal codes. From this, we design a scheme which is able to address the problem of performance degradation in highly selective fading environments while guaranteeing non-coherent signal recovery and full code rate in cooperative broadband networks. The coding scheme employed in this thesis relaxes the assumption of constant channel variation in the temporal and frequency dimensions over long symbol periods, thus performance degradation is reduced in frequency-selective and time-selective fading environments. Simulation results illustrate the performance of the proposed differential distributed quasi-orthogonal space-time-frequency coding scheme under different channel conditions.
21

Lúdik, Michal. "Porovnání hlasových a audio kodeků." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219793.

Full text
Abstract:
This thesis deals with a description of human hearing, audio and speech codecs, and objective quality measures, together with a practical comparison of codecs. The chapter on audio codecs covers the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis. The chapter on speech codecs describes linear predictive coding and the G.729 and OPUS codecs. The evaluation of quality covers the segmental signal-to-noise ratio and perceptual quality measures, WSS and PESQ. The last chapter describes the practical part of this thesis: a comparison of the memory and time consumption of the audio codecs, and a perceptual evaluation of the speech codecs' quality.
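The segmental signal-to-noise ratio mentioned above averages frame-wise SNRs rather than computing one global ratio, which tracks perceived speech quality better. A minimal Python sketch (the frame length and the handling of error-free frames are illustrative choices; implementations often clip per-frame values instead):

```python
import numpy as np

def segmental_snr(ref, test, seg_len=256):
    """Segmental SNR: mean over frames of 10*log10(signal energy / error energy)."""
    n = min(len(ref), len(test)) // seg_len * seg_len
    ref, test = ref[:n], test[:n]
    snrs = []
    for i in range(0, n, seg_len):
        s = ref[i:i + seg_len]
        e = s - test[i:i + seg_len]
        den = np.sum(e ** 2)
        if den == 0:
            continue  # error-free frame; skipped here for simplicity
        snrs.append(10 * np.log10(np.sum(s ** 2) / den))
    return float(np.mean(snrs)) if snrs else float("inf")

rng = np.random.default_rng(0)
clean = rng.normal(size=2048)
noisy = clean + 0.1 * rng.normal(size=2048)
```

With noise at one tenth of the signal amplitude, the result lands near 20 dB, matching the global energy ratio for this stationary example; the two measures diverge on real speech, where quiet frames dominate the segmental average.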
APA, Harvard, Vancouver, ISO, and other styles
22

Friedrich, Tomáš. "Komprese DNA sekvencí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237222.

Full text
Abstract:
The increasing volume of biological data requires finding new ways to store these data in genetic banks. The goal of this work is the design and implementation of a novel algorithm for compression of DNA sequences. The algorithm is based on aligning DNA sequences against a reference sequence and storing only the differences between each sequence and the reference model. The work contains the basic prerequisites from molecular biology needed to understand the details of the algorithm. Next, alignment algorithms and common compression schemes suitable for storing differences against a reference sequence are described. The work continues with a description of the implementation, followed by a derivation of time and space complexity and a comparison with common compression algorithms. Possible continuations of this work are discussed in the conclusion.
APA, Harvard, Vancouver, ISO, and other styles
23

Had, Filip. "Komprese signálů EKG nasnímaných pomocí mobilního zařízení." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316832.

Full text
Abstract:
Signal compression is a necessary part of ECG scanning because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission it is necessary to minimize the amount of data as much as possible, using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master's thesis also includes a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and realized.
APA, Harvard, Vancouver, ISO, and other styles
24

Pitzalis, Nicolas. "Plant-virus interactions : role of virus- and host-derived small non-coding RNAs during infection and disease." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAJ103.

Full text
Abstract:
Dans cette thèse, j'ai étudié le rôle des sRNAs dérivés de l'hôte et du virus lors de l'infection du colza (Brassica napus, Canola) par la souche UK1 du virus de la mosaïque du navet (TuMV-UK1). En utilisant un dérivé de TuMV fusionné avec un gène codant pour la protéine fluorescente verte (TuMV-GFP), deux cultivars de colza (‘Drakkar’ et ‘Tanto’) qui diffèrent par leur susceptibilité à ce virus ont été identifiés. Le profil transcriptionnel des foyers d'infection locale, dans les feuilles de Drakkar et de Tanto, par séquençage nouvelle génération (NGS) a révélé de nombreux gènes exprimés de manière différentielle. Les mêmes échantillons d'ARN provenant de feuilles de Drakkar et de Tanto, traitées par des virus ou utilisées en contrôle, ont également servi à établir le profil NGS des sRNAs (sRNAseq) et de leurs cibles potentielles d'ARN (PAREseq). Les analyses bioinformatiques et leur validation in vivo, ont permis d’identifier les événements de clivage de transcrits impliquant des micro ARN (miRNA) connus et encore inconnus. Fait important, les résultats indiquent que TuMV détourne la voie du RNA silencing de l’hôte avec des siRNAs issus de son propre génome (vsiRNA) pour cibler les gènes de l’hôtes. Le virus déclenche également le ciblage à grande échelle des ARN messagers (ARNm) de l’hôte par l’activation de la production de siRNAs secondaires en phase, à partir de locus PHAS. À leur tour, les vsiRNAs et les siRNAs dérivés de l'hôte (hsRNAs) ciblent et clivent l'ARN viral par le complexe RISC. Ces observations éclairent le rôle des siRNAs dérivés de l'hôte et du virus dans la coordination de l'infection virale. Un autre chapitre de cette thèse est consacré à l'analyse des maladies induites par des virus en utilisant comme modèle de plante Arabidopsis, infectée par un tobamovirus, le virus de la mosaïque du colza (ORMV). 
De plus, ces observations ont permis de proposer un modèle dans lequel cette guérison dépend d’un adressage important de vsiRNAs secondaires antiviraux depuis leur source de production jusqu’à leurs tissus de destination, et l'établissement d'un apport en vsiRNAs capable de bloquer l'activité VSR impliquée dans la formation des feuilles symptomatiques
In this thesis, I investigated the role of host- and virus-derived sRNAs during infection of Rapeseed (Brassica napus, Canola) by the UK1 strain of Turnip mosaic virus (TuMV-UK1). By using a TuMV derivative tagged with a gene encoding green fluorescent protein (TuMV-GFP), two rapeseed cultivars (‘Drakkar’ and ‘Tanto’) that differ in susceptibility to this virus were identified. Transcriptional profiling of local infection foci in Drakkar and Tanto leaves by next generation sequencing (NGS) revealed numerous differentially expressed genes. The same RNA samples from mock- and virus- treated Drakkar and Tanto leaves were also used for the global NGS profiling of sRNAs (sRNAseq) and their potential RNA targets (PAREseq). The bioinformatic analysis and their in vivo validation led to the identification of transcript cleavage events involving known and yet unknown miRNAs. Importantly, the results indicate that TuMV hijacks the host RNA silencing pathway with siRNAs derived from its own genome (vsiRNAs) to target host genes. The virus also triggers the widespread targeting of host messenger RNAs (mRNAs) through activation of phased, secondary siRNA production from PHAS loci. In turn, both vsiRNAs and host-derived siRNAs (hsRNAs) target and cleave the viral RNA by the RISC-mediated pathway. These observations illuminate the role of host and virus-derived sRNAs in the coordination of virus infection. Another chapter of this thesis is dedicated to the analysis of virus-induced diseases by using Arabidopsis plants infected with the Oilseed rape mosaic tobamovirus (ORMV) as a model. Initially, the infected plants develop leaves with strong disease symptoms. However, at a later stage, disease-free, “recovered” leaves start to appear. Analysis of symptoms recovery led to the identification of a mechanism in which the VSR and virus derived-siRNAs play a central role. 
I used Arabidopsis mutants impaired in transcriptional and post-transcriptional silencing pathways (TGS and PTGS, respectively) and a plant line carrying a promoter-driven GFP transgene silenced by PTGS (Arabidopsis line 8z2). Using various techniques able to monitor virus infection, small and long viral RNA molecules, VSR activity, as well as phloem-mediated transport within these lines, this study led to the identification of genes required for disease symptoms and disease symptom recovery. Moreover, the observations allowed us to propose a model in which symptom recovery occurs upon robust delivery of antiviral secondary vsiRNAs from source to sink tissues, and the establishment of a vsiRNA dosage able to block the VSR activity involved in the formation of disease symptoms.
APA, Harvard, Vancouver, ISO, and other styles
25

Krejčí, Michal. "Komprese dat." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217934.

Full text
Abstract:
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part of the thesis is a theoretical elaboration that informs the reader about basic terminology, the reasons for data compression, the use of data compression in standard practice, and the classification of compression algorithms. The practical part of the thesis deals with the realization of the compression algorithms in Matlab and LabWindows/CVI.
APA, Harvard, Vancouver, ISO, and other styles
26

Abo, Khayal Layal. "Transkriptomická charakterizace pomocí analýzy RNA-Seq dat." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-369382.

Full text
Abstract:
High-throughput sequencing technologies produce huge amounts of data that can reveal new genes, identify splice variants, and quantify gene expression across the whole genome. The volume and complexity of data from RNA-seq experiments require scalable methods of mathematical analysis based on robust statistical models. It is challenging to design integrated workflows that cover the different analysis procedures; in particular, comparative tests of transcripts are complicated by several sources of measurement variability and pose a number of statistical problems. In this research, an integrated transcriptional profiling pipeline was assembled to produce new reproducible code yielding biologically interpretable results. Starting from the annotation of RNA-seq data and quality assessment, a set of scripts is proposed for visualizing the quality assessment needed to support an RNA-Seq experiment with data analysis. Further, a comprehensive differential gene expression analysis is performed, providing descriptive methods for the tested RNA-Seq data. For the analysis of alternative splicing and differential exon usage, we improved the performance of DEXSeq by defining the open reading frame of the exon region that is used alternatively. Finally, a new methodology for the analysis of differentially expressed long non-coding RNA is described, based on finding the functional correlation of this RNA with neighbouring differentially expressed protein-coding genes. This gives a clearer view of the regulatory mechanism and provides a hypothesis about the role of long non-coding RNA in the regulation of gene expression.
APA, Harvard, Vancouver, ISO, and other styles
27

Ondra, Josef. "Komprese signálů EKG s využitím vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217209.

Full text
Abstract:
Signal compression is a daily-used tool for reducing memory requirements and for fast data communication. Methods based on the wavelet transform currently seem to be very effective. One available technique is signal decomposition with a suitable filter bank, followed by quantization of the coefficients. After packing the quantized coefficients into one sequence, run-length coding together with Huffman coding is applied. This thesis focuses on compression effectiveness for different wavelet transform and quantization settings.
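The pipeline this abstract describes (quantized coefficients, run-length coding, then Huffman coding) can be sketched in a few lines. This is an illustrative sketch only, not the thesis implementation; the helper names are hypothetical.

```python
import heapq
from collections import Counter

def run_length_encode(seq):
    """Collapse runs of repeated values into (value, count) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [tuple(p) for p in out]

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)       # merge the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [w1 + w2, uid, merged])
        uid += 1
    return heap[0][2]

coeffs = [0, 0, 0, 0, 3, 0, 0, -1, 0, 0, 0, 0]    # toy quantized coefficients
pairs = run_length_encode(coeffs)                  # (value, run-length) symbols
code = huffman_code(pairs)                         # prefix code over the pairs
bitstream = "".join(code[p] for p in pairs)
```

Frequent (value, run) pairs get short codewords, which is where the compression gain over plain run-length coding comes from.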
APA, Harvard, Vancouver, ISO, and other styles
28

Arnison, Matthew Raphael. "Phase control and measurement in digital microscopy." University of Sydney. Physics, 2003. http://hdl.handle.net/2123/569.

Full text
Abstract:
The ongoing merger of the digital and optical components of the modern microscope is creating opportunities for new measurement techniques, along with new challenges for optical modelling. This thesis investigates several such opportunities and challenges which are particularly relevant to biomedical imaging. Fourier optics is used throughout the thesis as the underlying conceptual model, with a particular emphasis on three-dimensional Fourier optics. A new challenge for optical modelling provided by digital microscopy is the relaxation of traditional symmetry constraints on optical design. An extension of optical transfer function theory to deal with arbitrary lens pupil functions is presented in this thesis. This is used to chart the 3D vectorial structure of the spatial frequency spectrum of the intensity in the focal region of a high aperture lens when illuminated by a linearly polarised beam. Wavefront coding has been used successfully in paraxial imaging systems to extend the depth of field. This is achieved by controlling the pupil phase with a cubic phase mask, thereby balancing optical behaviour with digital processing. In this thesis I present a high aperture vectorial model for focusing with a cubic phase mask, and compare it with results calculated using the paraxial approximation. The effect of a refractive index change is also explored. High aperture measurements of the point spread function are reported, along with experimental confirmation of high aperture extended depth of field imaging of a biological specimen. Differential interference contrast is a popular method for imaging phase changes in otherwise transparent biological specimens. In this thesis I report on a new isotropic algorithm for retrieving the phase from differential interference contrast images of the phase gradient, using phase shifting, two directions of shear, and non-iterative Fourier phase integration incorporating a modified spiral phase transform.
This method does not assume that the specimen has a constant amplitude. A simulation is presented which demonstrates good agreement between the retrieved phase and the phase of the simulated object, with excellent immunity to imaging noise.
APA, Harvard, Vancouver, ISO, and other styles
29

Almeida, João Paulo Pereira de. "O transcritoma antisense primário de Halobacterium salinarum NRC-1." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/17/17131/tde-15012019-101127/.

Full text
Abstract:
Em procariotos, RNAs antisense (asRNAs) constituem a classe de RNAs não codificantes (ncRNAs) mais numerosa detectada por métodos de avaliação de transcritoma em larga escala. Apesar da grande abundância, pouco se sabe sobre mecanismos regulatórios e aspectos da conservação evolutiva dessas moléculas, principalmente em arquéias, onde o mecanismo de degradação de RNAs dupla fita (dsRNAs) é um fenômeno pouco conhecido. No presente estudo, utilizando dados de dRNA-seq, identificamos 1626 inícios de transcrição primários antisense (aTSSs) no genoma de Halobacterium salinarum NRC-1, importante organismo modelo para estudos de regulação gênica no domínio Archaea. Integrando dados de expressão gênica obtidos a partir de 18 bibliotecas de RNA-seq paired-end, anotamos 846 asRNAs a partir dos aTSSs mapeados. Encontramos asRNAs em ~21% dos genes anotados, alguns desses relacionados a importantes características desse organismo como: codificadores de proteínas que constituem vesículas de gás e da proteína bacteriorodopsina, além de vários genes relacionados a maquinaria de tradução e transposases. Além desses, encontramos asRNAs em genes pertencentes a sistemas de toxinas-antitoxinas do tipo II e utilizando dados públicos de dRNA-seq, evidenciamos que esse é um fenômeno que ocorre em bactérias e arquéias. A interação de um ncRNA com seu RNA alvo pode ser dependente de proteínas, em arquéias, a proteína LSm é uma chaperona de RNA homóloga a Hfq de bactérias, implicada no controle pós-transcricional. Utilizamos dados de RIP-seq de RNAs imunoprecipitados com LSm e identificamos 91 asRNAs interagindo com essa proteína, para 81 desses, o mRNA do gene sense também foi encontrado interagindo. Buscando por aTSSs presentes nas mesmas regiões de genes ortólogos, identificamos 160 aTSSs que dão origem a asRNAs em H. salinarum possivelmente conservados em Haloferax volcanii. 
A expressão dos asRNAs anotados foi avaliada ao longo de uma curva de crescimento e em uma linhagem knockout de um gene que codifica uma RNase R, possível degradadora de dsRNAs em arquéias. Encontramos um total de 144 asRNAs diferencialmente expressos ao longo da curva de crescimento, para 56 desses o gene sense também está diferencialmente expresso, caracterizando possíveis mecanismos de regulação em cis por esses RNAs. Na linhagem knockout, encontramos cinco asRNAs diferencialmente expressos e apenas para um desses o gene sense também está diferencialmente expresso, resultado que não nos permitiu inferir um possível papel de degradação de dsRNAs da RNAse R em H. salinarum NRC-1. Nesse trabalho apresentamos um mapeamento completo do transcritoma antisense primário de H. salinarum NRC-1 com resultados que consistem em um importante passo na direção da compreensão do envolvimento da transcrição antisense na regulação gênica pós-transcricional desse organismo modelo do terceiro domínio da vida.
Antisense RNAs (asRNAs) constitute the most numerous class of non-coding RNAs (ncRNAs) detected by high-throughput transcriptome methods in prokaryotes. Despite this abundance, little is known about the regulatory mechanisms and evolutionary aspects of these molecules, mainly in archaea, where the mechanism of double-stranded RNA (dsRNA) degradation remains poorly understood. In this study, using dRNA-seq data, we identified 1626 antisense transcription start sites (aTSSs) in the genome of Halobacterium salinarum NRC-1, an important model organism for gene expression regulation studies in Archaea. By integrating gene expression data from 18 paired-end RNA-seq libraries, we were able to annotate 846 asRNAs from the mapped aTSSs. We found asRNAs in ~21% of annotated genes, including genes related to important characteristics of this organism, such as gas vesicle proteins, bacteriorhodopsin, the translation machinery, and transposases. We also found asRNAs in type II toxin-antitoxin systems and, using public dRNA-seq data, we show evidence that this phenomenon might be conserved in archaea and bacteria. The interaction of a ncRNA with its target may depend on the action of intermediary proteins. In archaea, the LSm protein is an RNA chaperone homologous to bacterial Hfq, involved in post-transcriptional regulation. We used RIP-seq data from RNAs immunoprecipitated with LSm and identified 91 asRNAs interacting with this protein; for 81 of these, the mRNA of the sense gene is also interacting. We searched for aTSSs present in the same regions of orthologous genes in Haloferax volcanii and found 160 aTSSs that originate asRNAs in H. salinarum NRC-1 and might be conserved between these two archaea. The expression of the annotated asRNAs was analyzed over a growth curve and in a knockout strain for an RNase R gene. We found 144 asRNAs differentially expressed over the growth curve; for 56 of these the sense gene was also differentially expressed, characterizing possible cis-regulatory asRNAs.
In the knockout strain we found five differentially expressed asRNAs and only one asRNA/gene pair; this result does not allow us to infer an in vivo dsRNA degradation activity for this RNase in H. salinarum NRC-1. This work contributes to the discovery of the antisense transcriptome of H. salinarum NRC-1, a relevant step toward uncovering the post-transcriptional gene regulatory network in this archaeon.
APA, Harvard, Vancouver, ISO, and other styles
30

Štys, Jiří. "Implementace statistických kompresních metod." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413295.

Full text
Abstract:
This thesis describes the Burrows-Wheeler compression algorithm. It focuses on each part of the Burrows-Wheeler pipeline, above all on the global structure transformation and the entropy coders. Methods such as move-to-front, inverse frequencies, and interval coding are described. Among the described entropy coders are the Huffman, arithmetic, and Rice-Golomb coders. In the conclusion, the described methods of global structure transformation and the entropy coders are tested, and the best combinations are compared with the most common compression algorithms.
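As a rough illustration of the front end of such a pipeline (a sketch, not the thesis code): the Burrows-Wheeler transform groups similar contexts together, and move-to-front turns the resulting runs into small indices that an entropy coder such as Huffman or arithmetic coding compresses well.

```python
def bwt(s, eof="\x00"):
    """Return the last column of the sorted rotations of s + eof."""
    s += eof
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s):
    """Replace each symbol by its index in a self-organising symbol list."""
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))   # move the symbol to the front
    return out

ranks = move_to_front(bwt("banana"))   # repeated symbols become small indices
```

The skew toward small index values in `ranks` is exactly what makes the subsequent entropy-coding stage effective.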
APA, Harvard, Vancouver, ISO, and other styles
31

Chang, Chih-Peng, and 張志鵬. "Segmented Vertex Chain Coding with Huffman Coding." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/56936669407017461858.

Full text
Abstract:
Master's
Chaoyang University of Technology
Department of Information Engineering, Master's Program
96
To significantly decrease the amount of information while still preserving the contour shape, chain coding is widely applied in digital image analysis, especially for raster-shaped images. In this paper, chain coding is integrated with the Single-side Grown Huffman Table (SGHT) to improve the data compression rate.
APA, Harvard, Vancouver, ISO, and other styles
32

Baltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.

Full text
Abstract:
Huffman coding is a good method for statistically compressing test data with high compression rates. Unfortunately, the on-chip decoder needed to decompress the encoded test data after it is loaded onto the chip may be too complex. With limited die area, the decoder complexity becomes a drawback, making full Huffman coding not ideal for use in scan data compression. Selectively encoding test data using Huffman coding can provide similarly high compression rates while reducing the complexity of the decoder. A smaller and less complex decoder makes Alternate Huffman Coding a viable option for compressing and decompressing scan test data.
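The selective idea can be sketched as follows (assumed details, with a fixed-length stand-in for the variable-length Huffman codewords a real implementation would assign): only the most frequent blocks get codewords, and rare blocks are emitted verbatim behind a one-bit escape, which keeps the on-chip decoder small.

```python
from collections import Counter

def selective_encode(blocks, k, width):
    """Code the k most frequent width-bit blocks; escape-code the rest."""
    top = [b for b, _ in Counter(blocks).most_common(k)]
    codelen = max(1, (k - 1).bit_length())
    # Stand-in table; a real selective Huffman coder would assign
    # variable-length Huffman codewords to the top-k blocks here.
    table = {b: "0" + format(i, "0%db" % codelen) for i, b in enumerate(top)}
    bits = []
    for b in blocks:
        if b in table:
            bits.append(table[b])                      # "0" + short code
        else:
            bits.append("1" + format(b, "0%db" % width))  # "1" + raw block
    return "".join(bits)
```

The decoder only needs the small top-k table: a leading "0" means look up the next bits, a leading "1" means copy the next `width` bits through unchanged.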
APA, Harvard, Vancouver, ISO, and other styles
33

Zheng, Li-Wen, and 鄭力文. "Personalize metro-style user interface by Huffman coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/rdxj52.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Engineering Science and Ocean Engineering
105
To satisfy the need for visual information in user interfaces, Microsoft proposed the Metro UI design, whose dynamic tiles show the importance of the various functions in the operating system; it received a lot of attention, and many websites have begun to follow this concept in designing their user interfaces. This research uses the concept to build a new user interface for a Web portal that dynamically presents usage requirements. However, the size and layout of the dynamic tiles currently have to be set in advance, and there is no effective method for computing them automatically. In this study, an automated Metro UI layout algorithm is proposed that computes the dynamic tile size and layout through Huffman coding, based on the user's usage frequency of the system functions. The experimental results show that the dynamically generated Metro UI of this research is better suited to adjusting the user experience to different needs than the traditional static, fixed-size Metro UI.
APA, Harvard, Vancouver, ISO, and other styles
34

Γρίβας, Απόστολος. "Μελέτη και υλοποίηση αλγορίθμων συμπίεσης." Thesis, 2011. http://nemertes.lis.upatras.gr/jspui/handle/10889/4336.

Full text
Abstract:
Σ΄αυτή τη διπλωματική εργασία μελετάμε κάποιους αλγορίθμους συμπίεσης δεδομένων και τους υλοποιούμε. Αρχικά, αναφέρονται βασικές αρχές της κωδικοποίησης και παρουσιάζεται το μαθηματικό υπόβαθρο της Θεωρίας Πληροφορίας. Παρουσιάζονται, επίσης διάφορα είδη κωδικών. Εν συνεχεία αναλύονται διεξοδικά η κωδικοποίηση Huffman και η αριθμητική κωδικοποίηση. Τέλος, οι δύο προαναφερθείσες κωδικοποιήσεις υλοποιούνται σε υπολογιστή με χρήση γλώσσας προγραμματισμού C και χρησιμοποιούνται για τη συμπίεση αρχείων κειμένου. Τα αρχεία που προκύπτουν συγκρίνονται με αρχεία που έχουν συμπιεστεί με χρήση προγραμμάτων του εμπορίου, αναλύονται τα αίτια των διαφορών στην αποδοτικότητα και εξάγονται χρήσιμα συμπεράσματα.
In this thesis we study some data compression algorithms and implement them. The basic principles of coding are mentioned and the mathematical foundation of information theory is presented. Also different types of codes are presented. Then the Huffman coding and arithmetic coding are analyzed in detail. Finally, the two codings are implemented on a computer using the C programming language in order to compress text files. The resulting files are compared with files that are compressed using commercial programmes, the causes of differences in the efficiency are analyzed and useful conclusions are drawn.
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Ruey-Jen, and 王瑞禎. "On the Design and VLSI architecture for Dynamic Huffman Coding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98190957224448119235.

Full text
Abstract:
Master's
National Cheng Kung University
Institute of Electrical Engineering
82
Huffman coding is a lossless data compression technique that achieves compact data representation by taking advantage of the statistical characteristics of the source. It is widely used in many data compression applications, such as high-definition television, disk operating systems, video coding, and large-scale data communication. Dynamic Huffman coding (DHC) can compress any data file without a preview. Compared with adaptive Huffman coding, the DHC method requires less memory and needs no side information. Compared with static Huffman coding, the DHC method achieves a better compression ratio. In this paper, a modified algorithm and CAM-based architectures for DHC are presented. The output throughput of the encoder is 1 bit/cycle. Based on this architecture, the DHC encoder chip is implemented. The chip has a gate count of 17652 and a die area of 4.8mm*4.8mm using the TSMC 0.8um spdm process. Timing analysis shows that the working frequency is about 20 MHz.
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Ya-Chen, and 黃雅臻. "Efficient Test Pattern Compression Techniques Based on Complementary Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/93893687721312782837.

Full text
Abstract:
Master's
Fu Jen Catholic University
Department of Electronic Engineering
97
In this thesis, complementary Huffman encoding techniques are proposed for test data compression of complex SOC designs during manufacturing testing. The correlations of blocks of bits in a test data set are exploited so that more test blocks can share the same codeword. Therefore, besides the compatible blocks used in previous works, the complementary property between test blocks can also be used. Based on this property, two algorithms are proposed for Huffman encoding. With these techniques, more test blocks can share the same codeword and the size of the Huffman tree can be reduced, which not only reduces the area overhead of the decoding circuitry but also substantially increases the compression ratio. To facilitate the proposed complementary encoding techniques, a don't-care assignment algorithm is also proposed. According to experimental results, the area overhead of the decompression circuit is lower than that of the full Huffman coding technique, and the compression ratio is higher than that of the selective and optimal selective Huffman coding techniques proposed in previous works.
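The complementary property can be illustrated with a toy sketch (hypothetical helpers, not the thesis algorithms): mapping each fixed-width test block to the canonical member of the pair {block, bitwise complement} lets both share one Huffman codeword, at the cost of one flag bit telling the decompressor whether to complement.

```python
WIDTH = 4                      # toy block width in bits
MASK = (1 << WIDTH) - 1

def canonical(block):
    """Map a block and its bitwise complement to one shared symbol."""
    return min(block, block ^ MASK)

blocks = [0b1010, 0b0101, 0b1111, 0b0000, 0b0101]
symbols = [canonical(b) for b in blocks]           # 0b1010 merges with 0b0101
flags = [int(b != canonical(b)) for b in blocks]   # 1 => complement on decode
```

Fewer distinct symbols means a smaller Huffman tree, which is the source of both the decoder-area saving and the compression gain described above.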
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Chia-Wei, and 劉家維. "A Dynamic Huffman Coding Method for TLC NAND Flash Memory." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ujktq2.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Electronic Engineering
107
Recently, NAND flash memory has gradually replaced traditional hard-disk drives and become the mainstream storage device. NAND flash memory has many advantages, such as non-volatility, small size, low power consumption, fast access speed, and shock resistance. With advances in process technology, NAND flash memory has evolved from single-level cell (SLC) and multi-level cell (MLC) into triple-level cell (TLC) and even quad-level cell (QLC). Although NAND flash memory has many advantages, it also has physical problems, such as the erase-before-write characteristic and the limited number of P/E cycles. Moreover, TLC NAND flash memory suffers from low reliability and a short lifetime. Thus, we propose a dynamic Huffman coding method that can be applied to the write operation of NAND flash memory. Our method dynamically selects a suitable Huffman code for different kinds of data and improves the VTH distribution of NAND flash memory, reducing the bit error rate and improving the reliability of NAND flash memory.
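The "select a suitable coding dynamically" step might look like the following sketch (an assumed mechanism, not the thesis design): encode each data chunk with every candidate code table, keep the cheapest, and record the table index so the decoder knows which table to apply.

```python
def best_table(chunk, tables):
    """Return the index of the candidate code table that yields fewest bits."""
    costs = [(sum(len(table[sym]) for sym in chunk), i)
             for i, table in enumerate(tables)]
    return min(costs)[1]

# Two toy code tables biased toward different symbol statistics:
t0 = {"a": "0", "b": "10", "c": "11"}
t1 = {"a": "11", "b": "0", "c": "10"}
choice = best_table("aaab", [t0, t1])   # mostly "a" => table t0 wins
```

In a flash controller the selection cost matters, so a real design would typically estimate the cost from a symbol histogram rather than fully encoding the chunk with every table.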
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Che-Lung, and 林哲論. "Bounded Error Huffman Coding in Applications of Wireless Sensor Networks." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61263794337918009117.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Engineering Science and Ocean Engineering
102
Measurement error exists in realistic WSN applications due to hardware limitations and application conditions. On the other hand, prolonging the lifetime of a WSN is an important issue because of the limited battery capacity of sensors. In previous research, both Bounded Error Data Compression (BEC) and Improved Bounded Error Data Compression (IBEC) exploited the error bound during data compression, accepting bounded data error to reduce the power consumption of the WSN. Unlike BEC and IBEC, the Bounded Error Huffman Coding (BEHC) proposed in this thesis applies the error bound within Huffman coding itself. In data correlation compression, the compression ratio is improved by avoiding excess bits in the codewords and by eliminating the defect of compressing data within the error bound into longer codes. In addition, examination of IBEC in this thesis shows that its data format and spatial correlation compression still have defects. Therefore, a New Improved Bounded Error Data Compression (NIBEC), which uses BEHC offline, is proposed to improve the data format and the spatial correlation compression for higher compression effectiveness. In the experiments, four types of raw data with different correlations were tested and the results compared with IBEC. The results show that NIBEC improves the compression ratio by 27%~47% and reduces power consumption by 25%~43%, proving that NIBEC improves compression effectively.
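The bounded-error idea can be illustrated with a simple quantizer (an assumed formulation for illustration, not the BEHC algorithm itself): snapping each reading to a bin centre at most `error` away shrinks the symbol alphabet that a subsequent Huffman coder sees, trading a known maximum distortion for shorter codes.

```python
def bound_error_quantize(samples, error):
    """Map each sample to the centre of a bin of width 2*error."""
    width = 2 * error
    return [round(s / width) * width for s in samples]

readings = [20.1, 20.3, 19.9, 20.2, 25.0]
quantized = bound_error_quantize(readings, error=0.25)
```

Every quantized value stays within the error bound of its reading, while nearby readings collapse onto the same symbol and therefore share one (short) Huffman codeword.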
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Sze Ching, and 陳思靜. "A Line-Based Lossless Display Frame Compressor Using Huffman Coding and Longest Prefix Match Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/75028784547709858315.

Full text
Abstract:
Master's
National Tsing Hua University
Department of Computer Science
104
We propose a lossless video frame compression algorithm employing a dictionary coding method, Huffman coding, and three schemes to achieve a high compression ratio. By analyzing the distribution of the differentials between the current pixel and its neighbors, we observe that the smaller the absolute value of a differential, the higher its probability. Based on this distribution, we compute the data reduction ratio (DRR) for cases using different numbers of code words and find that the more code words are used, the higher the DRR, which approaches a plateau. Considering memory usage, we choose a suitable number of code words for Huffman encoding. We employ a two-staged classification (TC) scheme consisting of the dictionary coding method and a longest prefix match (LPM) method. In the LPM method we choose for each pixel group a best truncation length (BTL) using an adaptive prefix bit truncation (APBT) scheme. We further compress the code words by a head code compression (HCC) scheme. Owing to the large number of code words used, we achieve about 0.5% more bit rate reduction than a previously proposed algorithm, and only 0.96% less bit rate reduction than using the maximum dictionary size.
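The prefix-truncation step can be sketched as follows (a hypothetical helper, assuming fixed-width pixel values; not the thesis code): find how many leading bits a pixel group shares, emit that prefix once, and store only the differing suffixes.

```python
def shared_prefix_length(values, width):
    """Count the leading bits common to all width-bit values in the group."""
    bits = [format(v, "0%db" % width) for v in values]
    n = 0
    while n < width and len({b[n] for b in bits}) == 1:
        n += 1
    return n

group = [0b1100, 0b1101, 0b1110]
n = shared_prefix_length(group, 4)            # the group shares the prefix "11"
suffixes = [format(v, "04b")[n:] for v in group]
```

Choosing the truncation length per group, as the APBT scheme above does, adapts the split between the shared prefix and the per-pixel suffixes to the local image statistics.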
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Chih Han, and 李致翰. "A Framework for EEG Compression with Compressive Sensing and Huffman Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28431597266494445081.

Full text
Abstract:
Master's
National Tsing Hua University
Department of Electrical Engineering
103
Compressive sensing (CS) is an emerging technique for data compression. In this thesis, it is used to compress electroencephalogram (EEG) signals. CS rests on two major principles: sparsity and incoherence. However, EEG signals are not sparse enough, so CS can recover compressed EEG signals only at low compression ratios; at high compression ratios, recovery fails. The compression ratios at which EEG can be reconstructed with high quality are not high enough to make the system energy-efficient, so the compression is not worthwhile. We therefore seek a way to make CS practical for compressing EEG signals at high compression ratios. From the literature, approaches to improving CS performance fall into three classes: first, design a stronger reconstruction algorithm; second, find a dictionary in which EEG signals have a sparse representation; third, combine CS with other compression techniques. Here we take the first and third approaches. First, we propose a modified iterative pseudo-inverse multiplication (MIPIM) algorithm with complexity O(KMN), where M is the dimension of the measurements, N the dimension of the signal, and K the sparsity level. This complexity is lower than that of most existing algorithms. Next, we extend MIPIM to a multiple-measurement-vector (MMV) algorithm, called simultaneous MIPIM (SMIPIM), which recovers all channels at the same time and exploits the correlation among channels to increase performance. SMIPIM reduces the normalized mean square error (NMSE) by 0.06 compared with classical CS algorithms.
For combining CS with other compression techniques, we adopt an existing framework that uses information from the server or receiver node to combine CS and Huffman coding efficiently. The framework was proposed to increase compression for telemedicine with EEG signals, but it has a shortcoming: the algorithm that produces the side information takes a long time to run, which makes instant telemedicine unavailable because sensors cannot transmit data until the information is received. We therefore propose a replacement algorithm that reduces the complexity from O(L^5) to O(L^2), where L is the number of channels; in our experiments it is about 10^5 times faster than the existing one. Finally, we simulated the entire system: the framework with our proposed algorithm computing the channel-correlation information and SMIPIM performing reconstruction. At a compression ratio of 3:1, the NMSE is 0.0672, versus 0.1554 for the original CS framework with Block Sparse Bayesian Learning Bound Optimization (BSBL-BO). On the other hand, at the minimum acceptable NMSE for EEG signals (0.09), we achieve a compression ratio of 0.31. We also use the compression ratio to estimate how many channels can be transmitted within a fixed bandwidth: the number of channels can increase by 16 with Bluetooth 2.0 and by 35 with ZigBee.
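The CS pipeline the abstract describes — random measurements of a sparse signal, greedy reconstruction, NMSE evaluation — can be sketched as follows. This is a generic illustration using orthogonal matching pursuit as a stand-in for the thesis's MIPIM/SMIPIM algorithms, which are not reproduced here; the dimensions and the Gaussian sensing matrix are assumptions.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, K = 256, 96, 8                              # signal length, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random sensing matrix

y = Phi @ x                                       # compressed measurements (N/M ≈ 2.7:1)
x_hat = omp(Phi, y, K)

nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
print(f"NMSE = {nmse:.2e}")
```

In the noiseless, exactly-sparse setting this recovers the signal almost perfectly; real EEG is only approximately sparse, which is exactly the difficulty the thesis addresses.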
APA, Harvard, Vancouver, ISO, and other styles
41

Tung, Chi-Yang, and 董啟揚. "A New Method of Image Compression by Wavelet Combining Huffman Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/97960665547728924728.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
104
This study proposes a new method of image compression that combines the wavelet transform with Huffman coding in order to reduce storage space, increase transmission speed, and improve image quality. First, we implement image compression with the wavelet transform; the wavelet transform alone serves as our baseline case, and the wavelet transform combined with Huffman coding is our improved case. Second, we encode the wavelet-transformed image with Huffman coding. Third, we simulate the image compression in MATLAB, compressing both color and gray-level images and measuring the quality of the compressed images by PSNR (peak signal-to-noise ratio). According to our simulations, the wavelet transform combined with Huffman coding performs significantly better than the wavelet transform alone and meets our requirements. The results of the research are as follows. 1. Reduced storage space: Huffman-encoding the compressed image reduces the storage space required. 2. Increased transmission speed: because the Huffman-encoded image file is smaller than in the baseline case, transmission is faster. 3. Better image quality: by measuring PSNR we verify that the compressed images are not only smaller but also of better quality.
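The PSNR metric used above to grade the compressed images is straightforward to compute; a minimal sketch with a hypothetical 8-bit test image (not one of the thesis's images):

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 8-bit "image" and a slightly distorted copy standing in for a
# decompressed result.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(img.astype(int) + rng.integers(-2, 3, size=img.shape),
                0, 255).astype(np.uint8)

print(f"PSNR = {psnr(img, noisy):.2f} dB")
```

Higher PSNR means the reconstruction is closer to the original; values above roughly 30 dB are usually considered acceptable for lossy image compression.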
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Jyun-Ruei, and 李俊叡. "The Study of RFID Authentication Schemes Using Message Authentication and Huffman Coding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98965119282757619915.

Full text
Abstract:
Master's
Asia University
Master's Program, Department of Computer Science and Information Engineering
97
As RFID technology matures and its manufacturing cost falls, it has been widely adopted in many fields such as supply chain management, access control, intelligent home appliances, electronic payment, and production automation. While RFID brings enormous commercial value and is simple and convenient to use, it also threatens the security and privacy of individuals and organizations. In this thesis, we introduce these privacy and security problems and then propose a new scheme: we use Huffman coding to encode the tag ID and a hash function to strengthen data security. In our scheme, each RFID tag emits a pseudonym in response to every reader query, making it impractical to track the activities and personal preferences of the tag's owner and thereby protecting the user's privacy. The proposed scheme provides not only high security but also high efficiency.
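The Huffman encoding of tag IDs mentioned above relies on the standard greedy construction from symbol frequencies; a generic sketch (the frequency table is illustrative, not taken from the thesis):

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a binary Huffman code for a {symbol: frequency} map."""
    tiebreak = count()                      # keeps heap entries comparable
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

freqs = {"A": 45, "B": 13, "C": 12, "D": 16, "E": 9, "F": 5}
code = huffman_code(freqs)
# The most frequent symbol gets the shortest codeword.
print(sorted(code.items()))
```

The resulting code is prefix-free, so concatenated codewords decode unambiguously — the property that lets the encoded tag ID be emitted as a single bit string.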
APA, Harvard, Vancouver, ISO, and other styles
43

Hussain, Fatima Omman. "Indirect text entry interfaces based on Huffman coding with unequal letter costs /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2008. Graduate Programme in Science.
Typescript. Includes bibliographical references (leaves 223-232). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45965
APA, Harvard, Vancouver, ISO, and other styles
44

Syu, Wei-Jyun, and 許瑋峻. "A Study of Reversible Data Hiding Based on SMVQ and Huffman Coding." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/5uj776.

Full text
Abstract:
Master's
National Formosa University
Graduate Institute of Computer Science and Information Engineering
101
Data hiding technology not only embeds high-payload secret data into a digital image but also allows the original cover image to be reconstructed at the receiver. This thesis proposes a reversible, high-payload data hiding scheme implemented in the SMVQ compression domain. The idea is to hide secret data in the compression codes of the image by exploiting the sorted state codebook of SMVQ; the compression codes reversibly reconstruct the original VQ-compressed cover image. In addition, Huffman coding is applied to compact the overall volume of data to be transmitted. The proposed scheme significantly enhances the VQ compression technique and achieves high embedding capacity. Experimental results show that it maintains good visual quality in the reconstructed VQ-compressed cover image and achieves the best performance among approaches in the literature, with a low average bit rate and a high embedding rate.
APA, Harvard, Vancouver, ISO, and other styles
45

Chung, Wei-Sheng, and 鍾偉勝. "Proof of Violation with Adaptive Huffman Coding Hash Tree for Cloud Storage Service." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j622gu.

Full text
Abstract:
Master's
National Central University
Department of Computer Science and Information Engineering
106
Although cloud storage services are very popular nowadays, users have no effective way to prove that the system misbehaved due to system errors, so they cannot claim a loss even when data or files are damaged by internal errors. As a result, enterprise users often do not trust, or simply do not adopt, cloud storage services. We design methods to solve this problem, focusing on Proof of Violation (POV). All updated files in cloud storage are digitally signed by both users and service providers, and their hash values are checked to detect violations, ensuring that both parties can trust each other. We propose the Adaptive Huffman Coding Hash Tree Construction (AHCHTC) algorithm for real-time POV in cloud storage services. The algorithm dynamically adds and adjusts hash tree nodes according to the update counters of files, consuming less execution time and memory than an existing hash tree-based POV scheme. We further propose the Adaptive Huffman Coding Hash Tree Construction/Counter Adjustment (AHCHTC/CA) algorithm, which improves AHCHTC by adjusting the counters of all nodes associated with files while maintaining a hash tree structure that satisfies the sibling property. AHCHTC/CA thus builds the hash tree from recent update counters rather than total update counters, reflecting recent file update patterns and further improving performance. Simulation experiments evaluate the proposed algorithms on the web page update patterns in NCUCCWiki and on the file update patterns of the Harvard University network file system provided in the SNIA (Storage Networking Industry Association) IOTTA (Input/Output Traces, Tools, and Analysis) data set.
The comparisons show that the proposed algorithms outperform a related method in both computation time and memory overhead. We also present observations on the experimental results and possible application scenarios for the proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
46

Kuo, Chung-Wei, and 郭崇韋. "Wireless-LAN Differential Source Coding." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/87720363285216673969.

Full text
Abstract:
碩士
逢甲大學
通訊工程所
93
The e-medical clothing integrates physiological signal sensors, a wireless data transmission module, an analyzing software on PDA or PC, and a power supply module on a specially designed clothing platform which make it a very power medical monitoring equipment. The wireless data transmission module uses radio frequency technology, in our case the Bluetooth technology, to improve the convenience of patients who need their physiological signals monitored on a regular and long term bases. Patients can move freely inside the coverage area of Bluetooth systems without any constraint where medical personnel can effectively retrieve real-time physiological data with no interference to patients. In this thesis, we will describe how to transmit physiological data on Bluetooth modules and use differential encoding techniques to compress the data before transmission to reduce the power consumption of transmissions and in turn prolong battery lifespan on e-medical clothings.
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yen-Lin, and 黃彥菱. "Entropy-based Differential Chain Coding." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/93338243002363491929.

Full text
Abstract:
碩士
國立清華大學
資訊工程學系
89
A simple but efficient technique for encoding object contour is presented. It is based on the chain coding representation and utilizes the benefit of differential chain coding (DCC) and entropy coding. DCC let the occurrences of symbols skewed. And the entropy coding let the frequently used symbols correspond to the short codes. This thesis proposes a simple DCC-based method to adaptively trace the contour direction locally and effectively select the code symbol accordingly. The method really alters the distribution of symbol occurrences to the extreme case, thus the gain of entropy code becomes significant. We conducted several experiments to compare with DCC with entropy coding and MPEG-4 shape coder. The experimental results show that this simple method is constantly better than DCC and comparable with the MPEG-4 shape code.
APA, Harvard, Vancouver, ISO, and other styles
48

Haung, Chan-Hao, and 黃展浩. "Improving the input speed of a multi-key prosthetic keyboard based on Huffman coding." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/11577293147455365349.

Full text
Abstract:
碩士
國立暨南國際大學
資訊工程學系
99
In a conventional keyboard, like a QWERTY keyboard, there are too many keys such that the spaces between neighboring keys are too small for physical disabled. In this study we propose a novel prosthetic keyboard with reduced number of keys such that the space between neighboring keys is sufficient for physically disabled. Given only 12 keys in the designed keyboard, multiple keystrokes are required for inputting a character. Each character in encoded by using radix-12 Huffman algorithm. The code set of each character is determined by its appearance frequency in a typical typing task. The higher appearance frequency of a character, the shorter its code set. Experiments with a subject with cerebral palsy showed that the average code length of all characters is 1.48 keys per character. Given the codes sets, this study further propose a method to find the optimal keyboard arrangement using Particle Swarm Optimization (PSO) algorithm. Given the appearance frequency of each key in a typical typing task, the objective function is based on the total time required for the subject to press the keys. The optimal keyboard arrangement is one that minimizes the objective function using PSO algorithm. Experiments were conducted to compare the performances of three different input methods, including the proposed Huffman method, the dual key method, and a 6-key Mose Code method. The Mose Code input method has been used by the subject for years. A commonly-used typing speed test software was used to record the typing speed of the subject. Results showed that the proposed Huffman method can help the subject to achieve more words per minutes than other two methods.
APA, Harvard, Vancouver, ISO, and other styles
49

HUANG, HSIN-HAN, and 黃信翰. "Hardware Implementation of a Real Time Lossless ECG Signal Compressor with Improved Multi-Stage Huffman Coding." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/58420664973565933749.

Full text
Abstract:
碩士
輔仁大學
電機工程學系碩士班
105
Electrocardiogram (ECG) monitoring systems are widely used in healthcare and telemedicine. The ECG signals must be compressed to enable efficient transmission and storage. In addition, real time monitoring is required. It is challenging to meet real time requirements and transmission bandwidth limit. In this paper, we propose hardware implementation of a real time lossless ECG signal compressor. Modified error predictor and multi-stage Huffman encoding algorithm are proposed. Without sacrificing hardware cost, we can use a two-stage encoding tables to realize multi-stage encoding, which has better compression efficiency. We implemented the lossless compressor hardware on an ARM-based FPGA platform. Experiments to evaluate MIT-BIH database show that the proposed work attain comparable compression performance and allow the real time data transmission under Bluetooth environment.
APA, Harvard, Vancouver, ISO, and other styles
50

Lai, Po-Yueh, and 賴柏岳. "Opus 編碼器中 Range Encoding 與 Huffman Coding 壓縮效率之比較." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2rrkgd.

Full text
Abstract:
碩士
國立臺北科技大學
資訊工程系所
105
Nowadays, streaming is the particular way to listen to the digital music online. People use the MP3 and AAC format in the past but the MP3 format is retired gradually in recent. Now there are lots of digital audio format in streaming technology and one of them is Opus Codec.   In this thesis, we study the CELT Layer in Opus Codec. Use the Huffman Coding in MP3 and AAC to replace the original method PVQ and Range Encoding in CELT. Through this experiment, we can know the compression efficiency between the Range Encoding and Huffman Coding.   We let this experiment separate into two parts. First, is obtaining the data from the source file. Second, code these data in MP3’s and AAC’s Huffman Coding method respectively and compare this method’s difference with the original method.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography