
Dissertations / Theses on the topic 'Discrete wavelet transform (DWT)'

Consult the top 50 dissertations / theses for your research on the topic 'Discrete wavelet transform (DWT).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Johansson, Gustaf. "Scalable video coding using the Discrete Wavelet Transform : Skalbar videokodning med användning av den diskreta wavelettransformen." Thesis, Linköping University, Information Coding, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54637.

Full text
Abstract:

A method for constructing a highly scalable bit stream for video coding is presented in detail and implemented in a demo application with a GUI on the Windows Vista operating system.

The video codec uses the Discrete Wavelet Transform in both the spatial and temporal directions, together with a zerotree quantizer, to achieve a bit stream that is highly scalable in quality, spatial resolution, and frame rate.


In this work, a method for creating a highly scalable video stream is presented. The method is then implemented in full in the C and C++ programming languages, with a graphical user interface, on the Windows Vista operating system.

The method uses the discrete wavelet transform in both the spatial dimensions and the temporal dimension, together with a zerotree quantizer, to achieve a video stream that is scalable in image quality, spatial resolution, and frame rate.
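As a minimal illustration of the transform this codec builds on (a generic sketch, not Johansson's implementation), one level of a 2D Haar DWT splits an image into an approximation subband and three detail subbands; spatial scalability comes from decoding only the approximation levels:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar DWT on an even-sized image.

    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    each half the size of the input in both directions."""
    # Horizontal step: average / difference of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Vertical step: repeat on both intermediate results.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4): keeping only LL gives a half-resolution image
```

Repeating the split on LL yields the multi-level decomposition that a zerotree quantizer then encodes; truncating the resulting bit stream lowers quality, and dropping detail levels lowers spatial resolution.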

APA, Harvard, Vancouver, ISO, and other styles
2

Boland, Simon Daniel. "High quality audio coding using the wavelet transform." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abdallah, Abdallah Sabry. "Investigation of New Techniques for Face detection." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33191.

Full text
Abstract:
The task of detecting human faces within either a still image or a video frame is one of the most popular object detection problems. For the last twenty years researchers have shown great interest in this problem because it is an essential pre-processing stage for computing systems that process human faces as input data. Example applications include face recognition systems, vision systems for autonomous robots, human-computer interaction (HCI) systems, surveillance systems, biometric-based authentication systems, video transmission and video compression systems, and content-based image retrieval systems. In this thesis, non-traditional methods are investigated for detecting human faces within color images or video frames. The methods are chosen such that the required computing power and memory consumption are adequate for real-time hardware implementation. First, a standard color image database is introduced in order to enable fair evaluation and benchmarking of face detection and skin segmentation approaches. Next, a new pre-processing scheme based on skin segmentation is presented to prepare the input image for feature extraction; the scheme requires relatively little computing power and memory. Then, several feature extraction techniques are evaluated. This thesis introduces feature extraction based on the two-dimensional Discrete Cosine Transform (2D-DCT), the two-dimensional Discrete Wavelet Transform (2D-DWT), geometrical moment invariants, and edge detection. It also constructs hybrid feature vectors by fusing 2D-DCT coefficients with edge information, as well as 2D-DWT coefficients with geometrical moments. A self-organizing map (SOM) based classifier is used in all experiments to distinguish between facial and non-facial samples. Two strategies are tried for making the final decision from the output of a single SOM or of multiple SOMs.
Finally, an FPGA-based framework that implements the presented techniques is described, along with a partial implementation. Every presented technique has been evaluated consistently on the same dataset. The experiments show very promising results: the highest detection rate of 89.2% was obtained using the fusion of DCT coefficients and edge information to construct the feature vector; the second highest rate of 88.7% was achieved using the fusion of DWT coefficients and geometrical moments; and the third highest rate of 85.2% was obtained by calculating the moments of edges.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
4

Jassim, Taha D. "Combined robust and fragile watermarking algorithms for still images. Design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/6460.

Full text
Abstract:
This thesis deals with copyright protection and content authentication for still images. New blind, transform-domain, block-based algorithms using one-level and two-level Discrete Wavelet Transform (DWT) were developed for copyright protection. A mobile phone number with international code is used as the watermarking data. The robust algorithms embed the watermarking information in the low-low (LL) frequency coefficients of the DWT, in the green channel of RGB colour images and the Y channel of YCbCr images. The watermarking information is scrambled using a secret key to increase the security of the algorithms. Because the watermarking information is small compared to the host image, the embedding process is repeated several times, which increases the robustness of the algorithms. A shuffling process is applied during the multiple embedding in order to avoid spatial correlation between the host image and the watermarking information. The effects of one-level and two-level DWT on robustness and image quality have been studied. The Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images. Several greyscale and colour still images are used to test the new robust algorithms. Compared to DCT-based algorithms, the new algorithms offered better robustness against different attacks such as JPEG compression, scaling, salt-and-pepper noise, Gaussian noise, filtering, and other image processing operations. The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a hash (MD5) as watermarking information in the spatial domain. The new algorithm showed high sensitivity to any tampering with the watermarked images. The combined fragile and robust watermarking caused minimal distortion to the images, and the combined scheme achieved both copyright protection and content authentication.
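A minimal sketch of the LL-subband embedding idea described above, assuming a one-level orthonormal Haar DWT and an illustrative strength parameter `alpha` (the thesis's actual filters, repeated embedding, shuffling, and key-based scrambling are omitted):

```python
import numpy as np

def dwt2_haar(x):
    """One-level 2D Haar DWT (orthonormal), returning LL, HL, LH, HH."""
    s = np.sqrt(2.0)
    a = (x[0::2, :] + x[1::2, :]) / s          # lowpass along columns
    d = (x[0::2, :] - x[1::2, :]) / s          # highpass along columns
    ll = (a[:, 0::2] + a[:, 1::2]) / s
    hl = (a[:, 0::2] - a[:, 1::2]) / s
    lh = (d[:, 0::2] + d[:, 1::2]) / s
    hh = (d[:, 0::2] - d[:, 1::2]) / s
    return ll, hl, lh, hh

def idwt2_haar(ll, hl, lh, hh):
    """Exact inverse of dwt2_haar."""
    s = np.sqrt(2.0)
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + hl) / s, (ll - hl) / s
    d[:, 0::2], d[:, 1::2] = (lh + hh) / s, (lh - hh) / s
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = (a + d) / s, (a - d) / s
    return x

def embed(host, bits, alpha=4.0):
    """Embed +/-alpha into the LL subband according to the watermark bits."""
    ll, hl, lh, hh = dwt2_haar(host)
    ll = ll + alpha * (2.0 * bits - 1.0)   # bit 1 -> +alpha, bit 0 -> -alpha
    return idwt2_haar(ll, hl, lh, hh)
```

Blind extraction would re-transform the received image and threshold the LL deviation; larger `alpha` improves robustness at the cost of PSNR, which is the fidelity/robustness trade-off the thesis measures.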
APA, Harvard, Vancouver, ISO, and other styles
6

Donovan, Rory Larson. "Hand (Motor) Movement Imagery Classification of EEG Using Takagi-Sugeno-Kang Fuzzy-Inference Neural Network." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1754.

Full text
Abstract:
Approximately 20 million people in the United States suffer from irreversible nerve damage and would benefit from a neuroprosthetic device modulated by a Brain-Computer Interface (BCI). These devices restore independence by replacing peripheral nervous system functions such as peripheral control. Although there are currently devices under investigation, contemporary methods fail to offer adaptability and proper signal recognition for output devices. Human anatomical differences prevent a fixed model system from providing consistent classification performance across subjects. Furthermore, notoriously noisy signals such as electroencephalography (EEG) require complex measures for signal detection. Therefore, there remains a tremendous need to explore and improve new algorithms. This report investigates a signal-processing model that is better suited for BCI applications because it incorporates machine learning and fuzzy logic. Whereas traditional machine learning techniques use precise functions to map the input into the feature space, fuzzy-neuro systems apply imprecise membership functions to account for uncertainty and can be updated via supervised learning. This method is thus better equipped to tolerate uncertainty and to improve performance over time; moreover, the variant of the algorithm used in this study has a higher convergence speed. The proposed two-stage signal-processing model consists of feature extraction and feature translation, with an emphasis on the latter. The feature extraction phase includes Blind Source Separation (BSS) and the Discrete Wavelet Transform (DWT), and the feature translation stage includes the Takagi-Sugeno-Kang Fuzzy-Neural Network (TSKFNN). The proposed model achieves an average classification accuracy of 79.4% over 40 subjects, which is higher than the 75% typically reported in the literature.
APA, Harvard, Vancouver, ISO, and other styles
7

Koglin, Ryan W. "Efficient Image Processing Techniques for Enhanced Visualization of Brain Tumor Margins." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1415835138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kang, Pengju. "On-line condition assessment of power transformer on-load tap-changers : transient vibration analysis using wavelet transform and self organising map." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chintala, Bala Venkata Sai Sundeep. "Objective Perceptual Quality Assessment of JPEG2000 Image Coding Format Over Wireless Channel." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17785.

Full text
Abstract:
Compressed images constitute a dominant source of Internet traffic today, and image compression plays an important role in modern multimedia communications. The image compression standards set by the Joint Photographic Experts Group (JPEG) include JPEG and JPEG2000. The expert group created the JPEG standard so that still pictures could be compressed for sending by e-mail, display on a webpage, and high-resolution digital photography. This standard was originally based on the Discrete Cosine Transform (DCT), a mathematical method used to convert a sequence of data to the frequency domain. In the year 2000, the group proposed a new standard, which came to be known as JPEG2000 and provides better compression efficiency; the downside is that achieving the same compression efficiency requires more computation than the original JPEG format. JPEG is a lossy compression standard, which can discard less important information without causing noticeable perceptual differences; in lossless compression, by contrast, the primary purpose is to reduce the number of bits required to represent the original image samples without any loss of information. Application areas of the JPEG image compression standard include the Internet, digital cameras, and printing and scanning peripherals. In this thesis work, a simulator-like setup is built for conducting the objective quality assessment. An image is given as input to the wireless communication system, its data size is varied (e.g. 5%, 10%, 15%), and a Signal-to-Noise Ratio (SNR) value is given as input for JPEG2000 compression. The compressed image is then passed through a JPEG encoder and transmitted over a Rayleigh fading channel.
The image obtained after these constraints are applied is decoded at the receiver, and the inverse discrete wavelet transform (IDWT) is applied to invert the JPEG2000 compression. The coefficients are scalar-quantized to reduce the number of bits required to represent them without loss of image quality, and the final image is displayed on the screen. The original input image is compared with the images of varying data size for each SNR value at the receiver after decoding. In particular, objective perceptual quality assessment through the Structural Similarity (SSIM) index is carried out using MATLAB.
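The SSIM index used above can be illustrated with a simplified, single-window variant (the standard index of Wang et al. averages the same statistic over local windows; this global version is only a sketch of the formula):

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Simplified SSIM computed over the whole image as one window.

    L is the dynamic range of the pixel values (255 for 8-bit images)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, size=(32, 32))
noisy = ref + rng.normal(0, 25, size=ref.shape)
print(ssim_global(ref, ref))    # 1.0 for identical images
print(ssim_global(ref, noisy))  # drops below 1.0 once distortion is added
```

Unlike PSNR, the index compares luminance, contrast, and structure jointly, which is why it tracks perceived quality of channel-degraded images more closely.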
APA, Harvard, Vancouver, ISO, and other styles
10

Čáp, Martin. "Sledování trendů elektrické aktivity srdce časově-frekvenčním rozkladem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218005.

Full text
Abstract:
This work applies time-frequency signal decomposition to monitoring trends in the electrical activity of the heart. The goal is to create an algorithm that tracks changes in the ST segment of an ECG recording, and to implement it in Matlab. The origin of the ECG and its measurement are analyzed. Before the trend calculation, the signal must be preprocessed; this consists of filtering and detecting the necessary points of the ECG signal. The wavelet transform is used for the decomposition, filtering, and measurement of the signal. The data source is the biomedical database PhysioNet. As the outcome of the algorithm, ST-segment trends are drawn for three recordings from three different patients and compared with a reference method of ST qualification. To qualify the stability of the heart as a system, methods were designed that track differences in the position of the maximal value in a two-zone spectrum, along with the Poincaré mapping method. The realized method is attached to this thesis.
APA, Harvard, Vancouver, ISO, and other styles
11

Holdova, Kamila. "Klasifikace spánkových EEG." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-219944.

Full text
Abstract:
This thesis deals with wavelet analysis of the sleep electroencephalogram for sleep stage scoring. The theoretical part covers the theory of EEG signal generation and analysis. Polysomnography (PSG) is also described: a method for simultaneously measuring different electrical signals, chiefly the electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG), which is used to diagnose sleep disorders. Sleep, sleep stages, and sleep disorders are therefore also described. In the practical part, results of applying the discrete wavelet transform (DWT) to decompose the sleep EEGs are shown, using the Daubechies 2 ('db2') mother wavelet at seven levels of decomposition. The resulting data were classified with a feedforward neural network trained by error backpropagation.
APA, Harvard, Vancouver, ISO, and other styles
12

Anton, Wirén. "The Discrete Wavelet Transform." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55063.

Full text
Abstract:
In this thesis we explore the theory behind wavelets. The main focus is on the discrete wavelet transform, although to reach this goal we also introduce the discrete Fourier transform, as it allows us to derive important properties related to wavelet theory, such as the multiresolution analysis. Based on the multiresolution analysis, it is shown how the discrete wavelet transform can be formulated and how it can be expressed in terms of a matrix. In later chapters we see how the discrete wavelet transform can be generalized to two dimensions, and discover how it can be used in image processing.
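The matrix formulation mentioned in this abstract can be illustrated for a length-4 signal (a generic example, not the thesis's notation): the one-level Haar analysis matrix is orthogonal, so its transpose inverts the transform.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
# One-level Haar analysis matrix for a length-4 signal:
# the first two rows average adjacent pairs, the last two take differences.
W = np.array([[s,  s, 0,  0],
              [0,  0, s,  s],
              [s, -s, 0,  0],
              [0,  0, s, -s]])

x = np.array([4.0, 2.0, 5.0, 5.0])
coeffs = W @ x   # [approx0, approx1, detail0, detail1]

print(np.allclose(W @ W.T, np.eye(4)))   # True: W is orthogonal
print(np.allclose(W.T @ coeffs, x))      # True: perfect reconstruction
```

Larger transforms and further decomposition levels are built the same way, by applying the averaging/differencing rows to the approximation coefficients again.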
APA, Harvard, Vancouver, ISO, and other styles
13

Pires, Paulo Roberto da Motta. "Processamento Inteligente de Sinais de Pressão e Temperatura Adquiridos Através de Sensores Permanentes em Poços de Petróleo." Universidade Federal do Rio Grande do Norte, 2012. http://repositorio.ufrn.br:8080/jspui/handle/123456789/12970.

Full text
Abstract:
Originally aimed at operational objectives, the continuous measurement of well bottomhole pressure and temperature, recorded by permanent downhole gauges (PDG), finds vast applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, data from PDGs are characterized by a large noise content, and the presence of outliers among valid measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps, and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The obtained results were considered quite satisfactory for offshore wells and met real-world requirements for utilization.
Originally aimed at monitoring operations, the continuous bottomhole pressure and temperature measurements acquired by PDGs (Permanent Downhole Gauges) find vast applicability in reservoir management: they allow well performance to be monitored and reservoir parameters to be estimated over the long term. However, despite their unquestionable usefulness, the data acquired from PDGs carry a large noise content. An equally unfavourable aspect is the occurrence of spurious values (outliers) immersed among the measurements recorded by the PDG. The present work addresses the initial treatment of the pressure and temperature signals using smoothing techniques, self-organizing maps, and the discrete wavelet transform. It also proposes a system for detecting transients relevant for analysis in the long record history, based on the coupling of fuzzy clustering with feed-forward neural networks. The results proved entirely satisfactory for offshore wells, meeting real requirements for the use of the signals recorded by PDGs.
APA, Harvard, Vancouver, ISO, and other styles
14

Dvořák, Martin. "Výukový video kodek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.

Full text
Abstract:
The first goal of this diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, based on this knowledge of compression tools, to implement the individual compression tools in the Matlab programming environment and assemble a simple model of a video codec. The thesis describes the three basic blocks, namely interframe coding, intraframe coding, and variable-length coding, according to the MPEG-2 standard.
APA, Harvard, Vancouver, ISO, and other styles
15

Grzeszczak, Aleksander. "VLSI architecture for Discrete Wavelet Transform." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9908.

Full text
Abstract:
In this thesis, we present a new, simple and efficient VLSI architecture (DWT-SA) for computing the Discrete Wavelet Transform. The proposed architecture is systolic in nature, modular, and extendible to a 1-D or 2-D DWT of any size. The DWT-SA has been designed, simulated, and implemented in silicon. The features of the DWT-SA architecture are: (1) it has efficient (close to 100%) hardware utilization; (2) it works with data streams of arbitrary size; (3) the design is cascadable for computation of one-, two- or three-dimensional DWTs; (4) it requires minimal interface circuitry on the chip for interconnecting to a standard communication bus. The DWT-SA design has been implemented in a 1.2 µm CMOS technology.
APA, Harvard, Vancouver, ISO, and other styles
16

Janáková, Jaroslava. "Odhad dechové frekvence z elektrokardiogramu a fotopletysmogramu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442594.

Full text
Abstract:
The master's thesis deals with estimating the respiratory rate from ECG and PPG signals, which are widely measured both in clinical practice and beyond. The theoretical part of the work outlines how a respiratory curve can be obtained from these signals. The practical part focuses on the implementation of five selected methods and their final evaluation and comparison.
APA, Harvard, Vancouver, ISO, and other styles
17

Urbánek, Pavel. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236385.

Full text
Abstract:
This thesis focuses on image compression using the wavelet transform. The first part provides the reader with background on image compression, presents well-known contemporary algorithms, and looks into the details of wavelet compression and the subsequent encoding schemes; both the JPEG and JPEG 2000 standards are introduced. The second part analyzes and describes the implementation of an image compression tool, including innovations and optimizations. The third part is dedicated to comparing and evaluating the achieved results.
APA, Harvard, Vancouver, ISO, and other styles
18

Sari, Huseyin. "Motion Estimation Using Complex Discrete Wavelet Transform." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1223205/index.pdf.

Full text
Abstract:
The estimation of optical flow has become a vital research field in image sequence analysis, especially in the past two decades, with applications in many areas such as stereo optics, video compression, robotics, and computer vision. In this thesis, the complex-wavelet-based algorithm for the estimation of optical flow developed by Magarey and Kingsbury is implemented and investigated. The algorithm is based on a complex version of the discrete wavelet transform (CDWT), which analyzes an image through blocks of filtering with a set of Gabor-like kernels with different scales and orientations. The output is a hierarchy of scaled and subsampled orientation-tuned subimages. The motion estimation algorithm is based on the relationship between translations in the image domain and phase shifts in the CDWT domain, which holds thanks to the shiftability and interpolability of the CDWT. Optical flow is estimated using this relationship at each scale, in a coarse-to-fine (hierarchical) manner, where estimates from coarser scales are refined at finer scales. The performance of the motion estimation algorithm is investigated on various image sequences, and the effects of options in the algorithm, such as curvature correction and the interpolation kernel between levels, and of parameter values, such as the confidence threshold, the maximum number of CDWT levels, and the minimum finest level of detail, are experimented with and discussed. The test results show that the method is superior to other well-known algorithms in estimation accuracy, especially under high illuminance variations and additive noise.
APA, Harvard, Vancouver, ISO, and other styles
19

Logue, James K. "The discrete, orthogonal wavelet transform, a projective approach." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA304330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Chan, Kenny Lee-Lung. "Finite wordlength effects in discrete time wavelet transform." Thesis, University of Ottawa (Canada), 1999. http://hdl.handle.net/10393/8698.

Full text
Abstract:
This thesis investigates finite wordlength effects in the one-dimensional discrete-time wavelet transform (DTWT). A MATLAB DTWT model is written, based on an efficient algorithm suggested by multiresolution analysis (MRA). The model allows users to investigate the overall non-ideal effects of an MRA-based DTWT system by specifying the round-off method for computation and the number of quantization bits used to represent filter coefficients and internal computational results. Further, discarding the high-frequency portion of a signal in the DTWT reconstruction phase (selective subband reconstruction) is shown to be practical in the presence of finite wordlength effects. The feasibility of using either direct-structure or lattice-structure filters to realize an MRA-based DTWT system is also considered. Simulation results show that lattice-structure filters have superior magnitude and phase responses to their direct-structure counterparts in most scenarios under investigation. Moreover, a lattice-structure filter demands relatively less computation than a direct-structure filter of the same length in the same DTWT configuration. In conclusion, the lattice-structure filter is a better alternative to the direct-structure filter for MRA-based DTWT system implementation.
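The coefficient-quantization side of the finite wordlength effects studied here can be sketched by rounding the Daubechies-4 lowpass coefficients to a fixed-point grid (an illustration only, not the thesis's MATLAB model):

```python
import numpy as np

def quantize(coeffs, bits):
    """Round coefficients to a fixed-point grid with `bits` fractional bits."""
    step = 2.0 ** (-bits)
    return np.round(np.asarray(coeffs) / step) * step

# Daubechies-4 lowpass analysis coefficients (exact closed form)
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))

for bits in (4, 8, 12):
    hq = quantize(h, bits)
    # An ideal lowpass branch sums to sqrt(2); rounding perturbs this sum,
    # and the perturbation accumulates through cascaded filter stages.
    print(bits, abs(hq.sum() - np.sqrt(2)))
```

Shortening the wordlength coarsens the grid, so the deviation grows; the thesis measures how such deviations, together with round-off in the internal computations, degrade reconstruction for direct and lattice filter structures.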
APA, Harvard, Vancouver, ISO, and other styles
21

Chan, Kenny. "Finite wordlength effects in discrete time wavelet transform." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq36674.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Yilmaz, Sener. "Generalized Area Tracking Using Complex Discrete Wavelet Transform: The Complex Wavelet Tracker." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608643/index.pdf.

Full text
Abstract:
In this work, a new method is proposed for area tracking. It is based on the Complex Discrete Wavelet Transform (CDWT) developed by Magarey and Kingsbury. The CDWT has advantages over the traditional Discrete Wavelet Transform, such as approximate shift invariance, improved directional selectivity, and robustness to noise and illumination changes. The proposed method generalizes the CDWT-based motion estimation method developed by Magarey and Kingsbury: the Complex Wavelet Tracker extends the original method to estimate the true motion of regions according to a parametric motion model. In this way, rotation, scaling, and shear types of motion can be handled in addition to pure translation. Both quantitative and qualitative simulations have been performed on the proposed method. Quantitative tests were performed on synthetically created test sequences, and the results were compared to ground truth and to intensity-based methods. Qualitative tests were performed on real sequences and evaluated empirically, again in comparison with intensity-based methods. The proposed method is observed to be very accurate in handling affine deformations over long sequences and robust to different target signatures and illumination changes. Its accuracy is comparable to intensity-based methods, while it handles a wider range of cases and is more robust to illumination changes. The method can be implemented in real time and could be a powerful replacement for current area trackers.
APA, Harvard, Vancouver, ISO, and other styles
23

Huluta, Emanuil. "Discrete wavelet transform architecture for image coding and decoding." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26491.

Full text
Abstract:
This thesis analyses and implements a Discrete Wavelet Transform (DWT) architecture for image processing. The architecture comprises two modules, one for image coding and the other for image decoding. Each module is implemented using a novel Modified Forward-Backward Register Allocation (MFBRA) method and accommodates two Fast Processing Elements (FPE). The resulting architecture minimizes the hardware required to perform the task and reduces processing time, rendering the structure suitable for real-time applications. The architecture is implemented and simulated using the Verilog Hardware Description Language.
APA, Harvard, Vancouver, ISO, and other styles
24

Collins, John Patrick. "Boundary effects from quantizing images with the discrete wavelet transform." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ43153.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

McCanny, P. "Generic silicon architectures for the two-dimensional discrete wavelet transform." Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Ibraheem, Mohammed Shaaban. "Logarithmic Discrete Wavelet Transform For High Quality Medical Image Compression." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066067/document.

Full text
Abstract:
De nos jours, la compression de l'image médicale est un processus essentiel dans les systèmes de cybersanté. Compresser des images médicales de haute qualité est une exigence vitale pour éviter de mal diagnostiquer les examens médicaux par les radiologues. WAAVES est un algorithme de compression d'images médicales prometteur basé sur la transformée en ondelettes discrètes (DWT) qui permet d'obtenir une performance de compression élevée par rapport à l'état de la technique. Les principaux objectifs de ce travail sont d'améliorer la qualité d'image lors de la compression à l'aide de WAAVES et de fournir une architecture DWT haute vitesse pour la compression d'image sur des systèmes embarqués. En ce qui concerne l'amélioration de la qualité, les systèmes de nombres logarithmiques (LNS) ont été explorés pour être utilisés comme une alternative à l'arithmétique linéaire dans les calculs de DWT. Une nouvelle bibliothèque LNS a été développée et validée pour réaliser le DWT logarithmique. En outre, une nouvelle méthode de quantification appelée (LNS-Q) basée sur l'arithmétique logarithmique a été proposée. Un nouveau schéma de compression (LNS-WAAVES) basé sur l'intégration de la méthode Hybrid-DWT et de la méthode LNS-Q avec WAAVES a été développé. Hybrid-DWT combine les avantages des domaines logarithmique et linéaire conduisant à l'amélioration de la qualité d'image et du taux de compression. Les résultats montrent que LNS-WAAVES est capable d'obtenir une amélioration de la qualité d'un pourcentage de 8% et de 34% par rapport aux WAAVES en fonction des paramètres de configuration de compression et des modalités d'image. Pour la compression sur les systèmes embarqués, le défi majeur consistait à concevoir une architecture 2D DWT qui permet d'obtenir un débit de 100 trames full HD. Une nouvelle architecture unifiée de calcul 2D DWT a été proposée. 
Cette nouvelle architecture effectue à la fois des transformations horizontale et verticale simultanément et élimine le problème des accès de pixel d'image en colonne à partir de la RAM DDR hors-puce. Tous ces facteurs ont conduit à une réduction de la largeur de bande DDR RAM requise de plus de 2X. Le concept proposé utilise des tampons de ligne à 4 ports conduisant à quatre opérations en parallèle pipeline: la DWT verticale, la transformée DWT horizontale et les opérations de lecture / écriture vers la mémoire externe. L'architecture proposée a seulement 1/8 de cycles par pixel (CPP) lui permettant de traiter plus de 100fps Full HD et est considérée comme une solution prometteuse pour le futur traitement vidéo 4K et 8K. Enfin, l'architecture développée est hautement évolutive, surperforme l'état de l'art des travaux connexes existants, et est actuellement déployé dans un prototype médical EEG vidéo
Nowadays, medical image compression is an essential process in eHealth systems. Compressing medical images in high quality is a vital demand to avoid misdiagnosing medical exams by radiologists. WAAVES is a promising medical images compression algorithm based on the discrete wavelet transform (DWT) that achieves a high compression performance compared to the state of the art. The main aims of this work are to enhance image quality when compressing using WAAVES and to provide a high-speed DWT architecture for image compression on embedded systems. Regarding the quality improvement, the logarithmic number systems (LNS) was explored to be used as an alternative to the linear arithmetic in DWT computations. A new LNS library was developed and validated to realize the logarithmic DWT. In addition, a new quantization method called (LNS-Q) based on logarithmic arithmetic was proposed. A novel compression scheme (LNS-WAAVES) based on integrating the Hybrid-DWT and the LNS-Q method with WAAVES was developed. Hybrid-DWT combines the advantages of both the logarithmic and the linear domains leading to enhancement of the image quality and the compression ratio. The results showed that LNS-WAAVES is able to achieve an improvement in the quality by a percentage of 8% and up to 34% compared to WAAVES depending on the compression configuration parameters and the image modalities. For compression on embedded systems, the major challenge was to design a 2D DWT architecture that achieves a throughput of 100 full HD frame/s. A novel unified 2D DWT computation architecture was proposed. This new architecture performs both horizontal and vertical transform simultaneously and eliminates the problem of column-wise image pixel accesses to/from the off-chip DDR RAM. All of these factors have led to a reduction of the required off-chip DDR RAM bandwidth by more than 2X. 
The proposed concept uses 4-port line buffers leading to pipelined parallel four operations: the vertical DWT, the horizontal DWT transform, and the read/write operations to the external memory. The proposed architecture has only 1/8 cycles per pixel (CPP) enabling it to process more than 100fps Full HD and it is considered a promising solution for future 4K and 8K video processing. Finally, the developed architecture is highly scalable, outperforms the state of the art existing related work, and currently is deployed in a video EEG medical prototype
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, Jin. "SINGLE ENDED TRAVELING WAVE BASED FAULT LOCATION USING DISCRETE WAVELET TRANSFORM." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/58.

Full text
Abstract:
In power transmission systems, fault location is an essential technology. When a fault occurs on a transmission line, it affects the whole power system, so the fault location must be found accurately and promptly to ensure continuity of supply. This thesis presents a study of traveling wave theory, fault location methods, the Karrenbauer transform, and the wavelet transform, focusing on the single-ended fault location method. The signal processing technique and an evaluation study are presented. The MATLAB SimPowerSystems toolbox is used to test and simulate fault scenarios for the evaluation studies.
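The single-ended principle the abstract relies on reduces to one formula: the surge travels from the fault to the measuring terminal, reflects back to the fault and returns, so the distance is d = v(t2 - t1)/2, where t1 and t2 are the arrival times of two successive wavefronts (in practice picked from the maxima of DWT detail coefficients). A sketch with made-up numbers:

```python
# Single-ended traveling-wave fault location: d = v * (t2 - t1) / 2.
# v, t1, t2 below are hypothetical values for illustration.

def fault_distance(t1, t2, v):
    """Distance to the fault from the measuring terminal (units of v * t)."""
    return v * (t2 - t1) / 2.0

v = 2.9e8        # assumed surge propagation speed on the line, m/s
t1 = 1.0e-3      # first wavefront arrival at the terminal, s
t2 = 1.4e-3      # arrival of the reflection from the fault, s
print(fault_distance(t1, t2, v) / 1000.0, "km")   # about 58 km from the terminal
```

The hard part, which the thesis addresses with the DWT, is timestamping the wavefronts t1 and t2 reliably in a noisy measured signal.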
APA, Harvard, Vancouver, ISO, and other styles
28

Tan, Kay-Chuan Benny. "Low power JPEG2000 5/3 discrete wavelet transform algorithm and architecture." Thesis, University of Edinburgh, 2004. http://hdl.handle.net/1842/14517.

Full text
Abstract:
With the advance of VLSI digital technology, many high-throughput, high-performance imaging and video applications have emerged and grown in usage. At the core of these applications is image and video compression technology. Image and video compression processes are by nature very computational and power consuming. Such high power consumption shortens the operating time of a portable imaging or video device and can also cause overheating. As such, ways of making image and video compression processes inherently low power are needed. The lifting-based Discrete Wavelet Transform (DWT) is increasingly used for compressing digital image data and is the basis of the JPEG2000 standard (ISO/IEC 15444). Even though the lifting-based DWT has attracted considerable implementation effort, there has been no work on the low-power realisation of the algorithm. Recent JPEG2000 DWT implementations are pipelined, data-path-centric designs and do not consider the issue of power. This thesis therefore sets out to realise a low-power JPEG2000 5/3 lifting-based DWT hardware architecture and investigates whether optimising at both the algorithmic and architectural levels yields lower-power hardware. It also ascertains whether an accumulating Arithmetic Logic Unit (ALU) centric processor architecture consumes less power than a feed-through pipelined data-path-centric processor architecture. A number of novel implementation schemes for the realisation of low-power JPEG2000 5/3 lifting-based DWT hardware are proposed and presented in this thesis. These schemes reduce the switched capacitance by reducing the number of computational steps and the data-path/arithmetic hardware, through manipulation of the lifting-based 5/3 DWT algorithm, operation scheduling, and alteration of the traditional processor architecture. This resulted in a novel SA-ALU centric JPEG2000 5/3 lifting-based DWT hardware architecture that saves about 25% of the hardware with respect to the two presented existing 5/3 lifting-based DWT architectures.
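The 5/3 lifting algorithm the abstract refers to is the reversible Le Gall 5/3 transform of JPEG2000, which needs only integer additions and shifts, exactly the property that makes low-power hardware realisations attractive. A minimal software sketch of the standard predict/update lifting steps (one level, 1-D, even-length input, symmetric extension at the borders):

```python
def dwt53_forward(x):
    """One level of the reversible JPEG2000 5/3 lifting DWT (integers in, integers out).
    Symmetric boundary extension; len(x) is assumed even here for brevity."""
    n = len(x)
    # Predict step: odd samples become detail coefficients d.
    d = [x[2*i + 1] - ((x[2*i] + x[min(2*i + 2, n - 2)]) // 2) for i in range(n // 2)]
    # Update step: even samples become approximation coefficients s.
    s = [x[2*i] + ((d[max(i - 1, 0)] + d[i] + 2) // 4) for i in range(n // 2)]
    return s, d

def dwt53_inverse(s, d):
    """Exact inverse: undo the update, then the predict, in reverse order."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        x[2*i] = s[i] - ((d[max(i - 1, 0)] + d[i] + 2) // 4)
    for i in range(len(d)):
        x[2*i + 1] = d[i] + ((x[2*i] + x[min(2*i + 2, n - 2)]) // 2)
    return x

sig = [10, 12, 14, 13, 11, 9, 8, 8]
s, d = dwt53_forward(sig)
assert dwt53_inverse(s, d) == sig   # lifting is exactly invertible in integer arithmetic
```

Because each lifting step is undone by the same expression with the sign flipped, the transform is lossless even with integer rounding, which is why JPEG2000 uses it for reversible coding.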
APA, Harvard, Vancouver, ISO, and other styles
29

Anoh, Kelvin O. O., N. T. Ali, Hassan S. O. Migdadi, Raed A. Abd-Alhameed, Tahereh S. Ghazaany, Steven M. R. Jones, James M. Noras, and Peter S. Excell. "Performance comparison of MIMO-DWT and MIMO-FrFT multicarrier systems." 2013. http://hdl.handle.net/10454/9607.

Full text
Abstract:
No
In this work, we discuss two new multicarrier modulating kernels that can be adopted for multicarrier signaling: the fractional Fourier transform (FrFT) and the discrete wavelet transform (DWT). We first relate the transforms mathematically, and then, using numerical and simulation comparisons, show their performance in terms of bit error ratio (BER) for Multiple Input Multiple Output (MIMO) applications. Numerical results using BPSK and QPSK show that both can be applied for multicarrier signaling; however, it can be more resource-effective to use the DWT as the baseband multicarrier kernel in preference to the FrFT.
APA, Harvard, Vancouver, ISO, and other styles
30

Asif, Rameez, Tahereh S. Ghazaany, Raed A. Abd-Alhameed, James M. Noras, Steven M. R. Jones, Jonathan Rodriguez, and Chan H. See. "MIMO discrete wavelet transform for the next generation wireless systems." 2013. http://hdl.handle.net/10454/9617.

Full text
Abstract:
No
A study is presented into the performance of the Fast Fourier Transform (FFT), the Discrete Wavelet Transform (DWT), and MIMO-DWT with transmit beamforming. A feedback loop from the receiver to the equalizer at the transmitter provides the channel state information, which is then used to construct a steering matrix for the transmission sequence, such that the received signals can be combined constructively to provide a reliable and improved system for next generation wireless systems. While convolution in the time domain equals multiplication in the frequency domain, no such counterpart exists for symbols in space; linear convolution therefore generates Intersymbol Interference (ISI), so both zero forcing (ZF) and minimum mean squared error (MMSE) equalizations have been employed. The results show a superior performance improvement while keeping the processing, power and implementation cost at the transmitter, which has fewer constraints. They also show that both equalization algorithms perform alike in wavelets and that the ISI is spread equally between the different wavelet domains.
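The ZF and MMSE equalizers mentioned above reduce, per subcarrier, to one-tap corrections of the received sample y = h·x: ZF divides by the channel gain, while MMSE regularises by the noise variance. A toy sketch with made-up channel gains:

```python
# One-tap frequency-domain equalization, per subcarrier:
#   ZF:   x_hat = y / h
#   MMSE: x_hat = conj(h) * y / (|h|^2 + sigma^2)
# Channel gains and symbols below are illustrative values only.

def zf_equalize(y, h):
    return [yk / hk for yk, hk in zip(y, h)]

def mmse_equalize(y, h, sigma2):
    return [hk.conjugate() * yk / (abs(hk) ** 2 + sigma2) for yk, hk in zip(y, h)]

h = [0.8 + 0.6j, 1.0 - 0.5j]           # assumed per-subcarrier channel gains
x = [1 + 0j, -1 + 0j]                  # BPSK symbols
y = [hk * xk for hk, xk in zip(h, x)]  # noiseless received samples

print(zf_equalize(y, h))               # ~ [1+0j, -1+0j]
print(mmse_equalize(y, h, 1e-9))       # MMSE -> ZF as sigma^2 -> 0
```

With noise present, ZF amplifies it on weak subcarriers (small |h|), whereas the MMSE denominator |h|² + σ² caps that amplification, which is the usual reason both are compared.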
APA, Harvard, Vancouver, ISO, and other styles
31

Mehrotra, Abhishek. "Shape Adaptive Integer Wavelet Transform Based Coding Scheme For 2-D/3-D Brain MR Images." Thesis, 2004. https://etd.iisc.ac.in/handle/2005/1171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mehrotra, Abhishek. "Shape Adaptive Integer Wavelet Transform Based Coding Scheme For 2-D/3-D Brain MR Images." Thesis, 2004. http://etd.iisc.ernet.in/handle/2005/1171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Min-yu, and 黃敏煜. "A Discrete Wavelet Transform (DWT) based De-noising Circuit Design with its Applications to Medical Signal Processing." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/57371780328343232025.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electronic Engineering
93
The Wavelet Transform is a multiresolution analysis that decomposes an original signal into multi-octave basis functions, from which the original signal can be analysed. It provides a novel and effective tool for many applications in signal processing. It also has an advantage over the traditional Fourier Transform in time-frequency analysis because of its multiresolution character, and has therefore been widely applied in signal and image processing research. In this thesis, we proposed and realized a Discrete Wavelet Transform (DWT) based de-noising circuit architecture with applications to noise reduction for medical signals. Our design was based on a three-level octave decomposition with Daubechies 4 filters. The circuit consists of three parts: DWT, thresholding, and IDWT. Software and hardware simulations were performed first. Furthermore, we implemented the de-noising circuit by downloading the Verilog code to an FPGA to observe its practical processing ability. As a result, by feeding a noisy Electrocardiogram (ECG) into the de-noising circuit, we found that the circuit satisfied the requirement of real-time processing and achieved good noise-reduction performance.
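The DWT → threshold → IDWT pipeline described above can be sketched in software. For brevity this uses a single-level Haar transform and soft thresholding, rather than the three-level Daubechies-4 filter bank realised in Verilog in the thesis:

```python
# Sketch of wavelet de-noising: transform, shrink the detail coefficients,
# transform back. Haar is used here only because it fits in a few lines.
import math

def haar_forward(x):
    a = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (noise lives in small details)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

noisy = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]   # step signal + small noise
a, d = haar_forward(noisy)
den = haar_inverse(a, soft_threshold(d, 0.2))       # small details removed
```

In the resulting `den`, each noisy pair collapses to its average while the large step between 1 and 3 survives, which is the edge-preserving behaviour that makes wavelet thresholding attractive for ECG signals.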
APA, Harvard, Vancouver, ISO, and other styles
34

Asif, Rameez, Raed A. Abd-Alhameed, and James M. Noras. "A Unique Wavelet-based Multicarrier System with and without MIMO over Multipath Channels with AWGN." 2015. http://hdl.handle.net/10454/7889.

Full text
Abstract:
yes
Recent studies suggest that multicarrier systems using wavelets outperform conventional OFDM systems using the FFT, in that they have well-contained side lobes, improved spectral efficiency and BER performance, and do not require a cyclic prefix. Here we study the wavelet packet and discrete wavelet transforms, comparing the BER performance of wavelet transform-based multicarrier systems and Fourier-based OFDM systems over multipath Rayleigh channels with AWGN. In the proposed system, zero-forcing channel estimation in the frequency domain has been used. Results confirm that discrete wavelet-based systems using Daubechies wavelets outperform both wavelet packet transform-based systems and FFT-OFDM systems in terms of BER. Finally, Alamouti coding and maximal ratio combining schemes were employed in MIMO environments, where results show that the effects of multipath fading were greatly reduced by the antenna diversity.
APA, Harvard, Vancouver, ISO, and other styles
35

Vaidya, Anil Pralhad. "A Model Study For The Application Of Wavelet And Neural Network For Identification And Localization Of Partial Discharges In Transformers." Thesis, 2004. https://etd.iisc.ac.in/handle/2005/1183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vaidya, Anil Pralhad. "A Model Study For The Application Of Wavelet And Neural Network For Identification And Localization Of Partial Discharges In Transformers." Thesis, 2004. http://etd.iisc.ernet.in/handle/2005/1183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Alasta, Amro F., Abdulrazag Algamudi, Rami S. R. Qahwaji, Stanley S. Ipson, A. Hauchecorne, and M. Meftah. "New method of Enhancement using Wavelet Transforms applied to SODISM Telescope." 2018. http://hdl.handle.net/10454/16611.

Full text
Abstract:
yes
PICARD is a space-based observatory hosting the Solar Diameter Imager and Surface Mapper (SODISM) telescope, which continuously observed the Sun from July 2010 to March 2014. In order to study the fine structure of the solar surface, it is helpful to apply techniques that enhance the images so as to improve the visibility of solar features such as sunspots or faculae. The objective of this work is to develop an innovative technique to enhance the quality of the SODISM images in the five wavelengths monitored by the telescope: 215.0 nm, 393.37 nm, 535.7 nm, 607.1 nm and 782.2 nm. An enhancement technique using interpolation of the high-frequency sub-bands obtained by the Discrete Wavelet Transform (DWT), and of the input image, is applied to the SODISM images. The input images are decomposed by the DWT, as well as by the Stationary Wavelet Transform (SWT), into four separate sub-bands in the horizontal and vertical directions, namely low-low (LL), low-high (LH), high-low (HL) and high-high (HH) frequencies. The DWT high-frequency sub-bands are interpolated by a factor of 2. The estimated high-frequency sub-bands (edges) are enhanced by introducing an intermediate stage using the SWT, and then these sub-bands and the input image are combined and interpolated with half of the interpolation factor, α/2, used for the high-frequency sub-bands, in order to reach the required size for IDWT processing. Quantitative and visual results show the superiority of the proposed technique over a bicubic image resolution enhancement technique. In addition, filling factors for sunspots are calculated from SODISM images and the results are presented in this work.
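The interpolation-by-2 step at the heart of the scheme can be illustrated in 1-D (the paper applies it row- and column-wise to 2-D sub-bands before the IDWT). Simple linear interpolation with border replication is assumed here; the paper's exact resampling kernel may differ:

```python
# Upsample a 1-D row of sub-band coefficients by a factor of 2:
# keep each coefficient and insert the midpoint between neighbours.
def interp2(band):
    out = []
    for i in range(len(band) - 1):
        out += [band[i], (band[i] + band[i + 1]) / 2.0]
    out += [band[-1], band[-1]]          # replicate at the border
    return out

print(interp2([1.0, 3.0, 5.0]))   # [1.0, 2.0, 3.0, 4.0, 5.0, 5.0]
```

Doubling each sub-band this way, then running the IDWT, yields an output image of twice the input size, which is why the input image itself can serve as the enlarged LL band.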
APA, Harvard, Vancouver, ISO, and other styles
38

Gupta, Pradeep Kumar. "Denoising And Inpainting Of Images : A Transform Domain Based Approach." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/515.

Full text
Abstract:
Many scientific data sets are contaminated by noise, either because of the data acquisition process or because of naturally occurring phenomena. A first step in analyzing such data sets is denoising, i.e., removing additive noise from a noisy image. For images, noise suppression is a delicate and difficult task: a trade-off between noise reduction and the preservation of actual image features has to be made in a way that enhances the relevant image content. The opening chapter of this thesis is introductory in nature and discusses popular denoising techniques in the spatial and frequency domains. The wavelet transform has wide applications in image processing, especially in the denoising of images. Wavelet systems are a set of building blocks that represent a signal in an expansion set involving indices for time and scale; these systems allow the multi-resolution representation of signals. Several well-known denoising algorithms exist in the wavelet domain which penalize the noisy coefficients by thresholding them. We discuss wavelet transform based denoising of images using bit planes; this approach preserves the edges in an image. The proposed approach relies on the fact that the wavelet transform allows the denoising strategy to adapt itself according to the directional features of coefficients in the respective sub-bands. Further, issues related to a low-complexity implementation of this algorithm are discussed. The proposed approach has been tested on different sets of images under different noise intensities. Studies have shown that this approach provides a significant reduction in normalized mean square error (NMSE), and the denoised images are visually pleasing. Many image compression techniques still use the redundancy reduction property of the discrete cosine transform (DCT), so the development of a denoising algorithm in the DCT domain has practical significance. In chapter 3, a DCT based denoising algorithm is presented. 
In general, the design of filters largely depends on a priori knowledge about the type of noise corrupting the image and about the image features. This makes standard filters application and image specific. The most popular filters, such as the average, Gaussian and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated approach to designing filters based on the DCT is proposed in chapter 3. This algorithm reorganizes DCT coefficients in a wavelet transform manner to get better energy clustering at the desired spatial locations. An adaptive threshold is chosen because such adaptivity can improve wavelet threshold performance, as it allows additional local information of the image to be incorporated in the algorithm. Evaluation results show that the proposed filter is robust under various noise distributions and does not require any a priori knowledge about the image. Inpainting is another application that comes under the category of image processing. Inpainting provides a way to reconstruct small damaged portions of an image. Filling in missing data in digital images has a number of applications, such as image coding and wireless image transmission for recovering lost blocks, special effects (e.g., removal of objects) and image restoration (e.g., removal of solid lines and scratches, and noise removal). In chapter 4, a wavelet based inpainting algorithm is presented for the reconstruction of small missing and damaged portions of an image while preserving the overall image quality. This approach exploits the directional features that exist in the wavelet coefficients of the respective sub-bands. The concluding chapter presents a brief review of the three new approaches: the wavelet and DCT based denoising schemes and the wavelet based inpainting method.
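The adaptive wavelet threshold mentioned above is commonly realised per sub-band with Donoho's universal threshold, σ√(2 ln N), where σ is the robust noise estimate median(|d|)/0.6745 taken from the detail coefficients. The thesis's exact rule may differ, so this is only a representative sketch:

```python
# Universal threshold per detail band: sigma * sqrt(2 * ln N),
# with sigma estimated robustly via the median absolute coefficient.
import math
import statistics

def universal_threshold(detail):
    sigma = statistics.median([abs(c) for c in detail]) / 0.6745
    return sigma * math.sqrt(2.0 * math.log(len(detail)))

# Mostly small (noise-like) coefficients plus one strong edge coefficient:
d = [0.1, -0.2, 0.15, 0.05, 2.0, -0.12, 0.08, 0.18]
t = universal_threshold(d)
kept = [c if abs(c) > t else 0.0 for c in d]   # hard thresholding
```

Because the median ignores the single large outlier, the threshold lands between the noise floor and the edge coefficient, so only the edge survives the shrinkage.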
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Pradeep Kumar. "Denoising And Inpainting Of Images : A Transform Domain Based Approach." Thesis, 2007. http://hdl.handle.net/2005/515.

Full text
Abstract:
Many scientific data sets are contaminated by noise, either because of the data acquisition process or because of naturally occurring phenomena. A first step in analyzing such data sets is denoising, i.e., removing additive noise from a noisy image. For images, noise suppression is a delicate and difficult task: a trade-off between noise reduction and the preservation of actual image features has to be made in a way that enhances the relevant image content. The opening chapter of this thesis is introductory in nature and discusses popular denoising techniques in the spatial and frequency domains. The wavelet transform has wide applications in image processing, especially in the denoising of images. Wavelet systems are a set of building blocks that represent a signal in an expansion set involving indices for time and scale; these systems allow the multi-resolution representation of signals. Several well-known denoising algorithms exist in the wavelet domain which penalize the noisy coefficients by thresholding them. We discuss wavelet transform based denoising of images using bit planes; this approach preserves the edges in an image. The proposed approach relies on the fact that the wavelet transform allows the denoising strategy to adapt itself according to the directional features of coefficients in the respective sub-bands. Further, issues related to a low-complexity implementation of this algorithm are discussed. The proposed approach has been tested on different sets of images under different noise intensities. Studies have shown that this approach provides a significant reduction in normalized mean square error (NMSE), and the denoised images are visually pleasing. Many image compression techniques still use the redundancy reduction property of the discrete cosine transform (DCT), so the development of a denoising algorithm in the DCT domain has practical significance. In chapter 3, a DCT based denoising algorithm is presented. 
In general, the design of filters largely depends on a priori knowledge about the type of noise corrupting the image and about the image features. This makes standard filters application and image specific. The most popular filters, such as the average, Gaussian and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated approach to designing filters based on the DCT is proposed in chapter 3. This algorithm reorganizes DCT coefficients in a wavelet transform manner to get better energy clustering at the desired spatial locations. An adaptive threshold is chosen because such adaptivity can improve wavelet threshold performance, as it allows additional local information of the image to be incorporated in the algorithm. Evaluation results show that the proposed filter is robust under various noise distributions and does not require any a priori knowledge about the image. Inpainting is another application that comes under the category of image processing. Inpainting provides a way to reconstruct small damaged portions of an image. Filling in missing data in digital images has a number of applications, such as image coding and wireless image transmission for recovering lost blocks, special effects (e.g., removal of objects) and image restoration (e.g., removal of solid lines and scratches, and noise removal). In chapter 4, a wavelet based inpainting algorithm is presented for the reconstruction of small missing and damaged portions of an image while preserving the overall image quality. This approach exploits the directional features that exist in the wavelet coefficients of the respective sub-bands. The concluding chapter presents a brief review of the three new approaches: the wavelet and DCT based denoising schemes and the wavelet based inpainting method.
APA, Harvard, Vancouver, ISO, and other styles
40

Lanka, Karthikeyan. "Predictability of Nonstationary Time Series using Wavelet and Empirical Mode Decomposition Based ARMA Models." Thesis, 2013. http://etd.iisc.ac.in/handle/2005/3363.

Full text
Abstract:
The idea behind time series forecasting techniques is that the past contains information about the future. How that information is encoded in the past, and how it can be interpreted and used to extrapolate future events, constitutes the crux of time series analysis and forecasting. Several methods, such as qualitative techniques (e.g., the Delphi method), causal techniques (e.g., least squares regression) and quantitative techniques (e.g., smoothing methods, time series models), have been developed in the past, in which the concept is to establish a model, either theoretically or mathematically, from past observations and estimate the future from it. Of all the models, time series methods such as the autoregressive moving average (ARMA) process have gained popularity because of their simplicity of implementation and accuracy in obtaining forecasts. But these models were formulated based on certain properties that a time series is assumed to possess. Classical decomposition techniques were developed to supplement the requirements of time series models. These methods try to describe a time series in terms of simple patterns called trend, cyclical and seasonal patterns, along with noise. The idea of decomposing a time series into component patterns, modeling each component using forecasting processes, and finally combining the component forecasts to obtain the actual time series predictions yielded superior performance over standard forecasting techniques. All these methods involve the basic principle of moving average computation. However, the classical decomposition methods are disadvantageous in that they yield a fixed number of components for any time series and their decompositions are data independent. Moreover, during moving average computation the edges of the time series might not get modeled properly, which affects long-range forecasting. These issues are addressed by more efficient and advanced decomposition techniques such as Wavelets and Empirical Mode Decomposition (EMD). 
Wavelets and EMD are among the most innovative concepts considered in time series analysis and are focused on processing nonlinear and nonstationary time series. Hence, this research has been undertaken to ascertain the predictability of nonstationary time series using wavelet and Empirical Mode Decomposition (EMD) based ARMA models. The development of wavelets has been based on concepts from Fourier analysis and the windowed Fourier transform. Accordingly, the necessity that led to the advent of wavelets is presented first, followed by a discussion of the advantages that wavelets provide. Wavelets were originally defined for continuous time series; later, to match real-world requirements, wavelet analysis was defined in the discrete setting, which is called the Discrete Wavelet Transform (DWT). The current thesis utilizes the DWT for performing time series decomposition. A detailed discussion of the theory behind time series decomposition is presented in the thesis, followed by a description of the mathematical viewpoint of time series decomposition using the DWT, including the decomposition algorithm. EMD falls in the same class as wavelets with respect to time series decomposition. EMD was developed from the observation that most time series in nature contain multiple frequencies, leading to the simultaneous existence of different scales. This method, compared to standard Fourier analysis and wavelet algorithms, has a greater scope of adaptation in processing various nonstationary time series. The method involves decomposing any complicated time series into a very small, finite number of empirical modes (IMFs, Intrinsic Mode Functions), where each mode contains information about the original time series. The algorithm for time series decomposition using EMD is presented in the current thesis after the conceptual elucidation. 
Later, the proposed time series forecasting algorithm that couples EMD and the ARMA model is presented; it also takes into account the number of time steps ahead for which forecasting needs to be performed. In order to test the wavelet and EMD based algorithms for the prediction of nonstationary time series, streamflow data from the USA and rainfall data from India are used in the study. Four nonstationary streamflow sites (USGS data resources) with monthly total volumes and two nonstationary gridded rainfall sites (IMD) with monthly total rainfall are considered for the study. The predictability of the proposed algorithm is checked in two scenarios, the first being a six-months-ahead forecast and the second a twelve-months-ahead forecast. The Normalized Root Mean Square Error (NRMSE) and the Nash-Sutcliffe Efficiency Index (Ef) are used to evaluate the performance of the proposed techniques. Based on the performance measures, the results indicate that the wavelet based analyses generate good variations in the case of the six-months-ahead forecast, maintaining harmony with the observed values at most of the sites. Although the methods are observed to capture the minima of the time series effectively in both the six- and twelve-months-ahead predictions, better forecasts are obtained with the wavelet based method than with the EMD based method in the case of the twelve-months-ahead predictions. It is therefore inferred that the wavelet based method has better prediction capabilities than the EMD based method, despite some of the limitations of time series methods and the manner in which the decomposition takes place. Finally, the study concludes that the wavelet based time series algorithm can be used to model events such as droughts with reasonable accuracy. Some modifications that could be made to the model are also suggested, which can extend the scope of its applicability to other areas in the field of hydrology.
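The two performance measures used in the study have simple closed forms. Note that several NRMSE normalisations exist; normalisation by the observed range is an assumption here, and the thesis may use a different denominator:

```python
# NRMSE = sqrt(mean((o - p)^2)) / (max(o) - min(o))       (range-normalised)
# Ef (Nash-Sutcliffe) = 1 - sum((o - p)^2) / sum((o - mean(o))^2)
import math

def nrmse(obs, pred):
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return math.sqrt(mse) / (max(obs) - min(obs))

def nash_sutcliffe(obs, pred):
    mean_o = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

obs = [2.0, 4.0, 6.0, 8.0]      # illustrative monthly totals
assert nash_sutcliffe(obs, obs) == 1.0   # perfect forecast scores Ef = 1
assert nrmse(obs, obs) == 0.0
```

Ef = 1 indicates a perfect forecast, Ef = 0 means the model is no better than predicting the observed mean, and negative values mean it is worse, which is why Ef complements a pure error measure like NRMSE.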
APA, Harvard, Vancouver, ISO, and other styles
41

Lanka, Karthikeyan. "Predictability of Nonstationary Time Series using Wavelet and Empirical Mode Decomposition Based ARMA Models." Thesis, 2013. http://etd.iisc.ernet.in/2005/3363.

Full text
Abstract:
The idea behind time series forecasting techniques is that the past carries certain information about the future. So, the question of how the information encoded in the past can be interpreted and later used to extrapolate events of the future constitutes the crux of time series analysis and forecasting. Several methods, such as qualitative techniques (e.g., the Delphi method), causal techniques (e.g., least squares regression) and quantitative techniques (e.g., smoothing methods, time series models), have been developed in the past, in which the concept is to establish a model, either theoretically or mathematically, from past observations and estimate the future from it. Of all the models, time series methods such as the autoregressive moving average (ARMA) process have gained popularity because of their simplicity in implementation and accuracy in obtaining forecasts. But these models were formulated based on certain properties that a time series is assumed to possess. Classical decomposition techniques were developed to supplement the requirements of time series models. These methods try to define a time series in terms of simple patterns called trend, cyclical and seasonal patterns, along with noise. So, the idea of decomposing a time series into component patterns, later modeling each component using forecasting processes and finally combining the component forecasts to obtain the actual time series predictions yielded superior performance over standard forecasting techniques. All these methods involve the basic principle of moving average computation. But the classical decomposition methods have disadvantages: they impose a fixed number of components on any time series, and their decompositions are not data dependent. Moreover, during moving average computation, the edges of the time series might not get modeled properly, which affects long-range forecasting. These issues can be addressed by more efficient and advanced decomposition techniques such as wavelets and Empirical Mode Decomposition (EMD).
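The moving-average idea behind classical decomposition, and the edge problem noted above, can be sketched in a few lines of Python (an illustrative toy, not code from the thesis; the function names are my own):

```python
def moving_average_trend(x, window):
    """Centered moving average with an odd window; edge positions stay None,
    which is exactly the edge-modeling problem noted above."""
    half = window // 2
    trend = [None] * len(x)
    for i in range(half, len(x) - half):
        trend[i] = sum(x[i - half:i + half + 1]) / window
    return trend

def decompose(x, window):
    """Split a series into a smooth trend and a residual (noise) component."""
    trend = moving_average_trend(x, window)
    residual = [None if t is None else xi - t for xi, t in zip(x, trend)]
    return trend, residual
```

On a linear series the residual vanishes wherever the trend is defined, while the first and last `window // 2` points cannot be modeled at all.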
Wavelets and EMD are some of the most innovative concepts considered in time series analysis and are focused on processing nonlinear and nonstationary time series. Hence, this research has been undertaken to ascertain the predictability of nonstationary time series using wavelet and Empirical Mode Decomposition (EMD) based ARMA models. Wavelets were developed from the concepts of Fourier analysis and the windowed Fourier transform. Accordingly, the necessity that led to the advent of wavelets is presented first, followed by a discussion of the advantages that wavelets provide. Wavelets were originally defined for continuous time series. Later, to match real-world requirements, wavelet analysis was defined in the discrete setting, called the Discrete Wavelet Transform (DWT). The current thesis utilizes the DWT for performing time series decomposition. A detailed discussion of the theory behind time series decomposition is presented in the thesis, followed by a description of the mathematical viewpoint of time series decomposition using the DWT, including the decomposition algorithm. EMD belongs to the same class as wavelets with respect to time series decomposition. EMD arises from the fact that most time series in nature contain multiple frequencies, leading to the simultaneous existence of different scales. Compared to standard Fourier analysis and wavelet algorithms, this method has greater scope of adaptation in processing various nonstationary time series. The method decomposes any complicated time series into a small, finite number of empirical modes (IMFs, Intrinsic Mode Functions), where each mode contains information of the original time series. The algorithm of time series decomposition using EMD is presented in the current thesis after its conceptual elucidation.
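A single level of the DWT decomposition algorithm can be illustrated with the Haar wavelet, the simplest member of the family (a sketch only; the thesis's mathematical treatment covers general filter banks):

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: pairwise low-pass (approximation) and
    high-pass (detail) outputs, each downsampled by two."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail
```

Because the filters are orthonormal, the total energy of the signal is preserved across the two output bands.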
Later, the proposed time series forecasting algorithm that couples EMD and an ARMA model is presented, which also accounts for the number of time steps ahead for which forecasting needs to be performed. In order to test the wavelet and EMD based algorithms for prediction of nonstationary time series, streamflow data from the USA and rainfall data from India are used in the study: four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall. The predictability of the proposed algorithm is checked in two scenarios, the first being six-months-ahead forecasts and the second being twelve-months-ahead forecasts. The Normalized Root Mean Square Error (NRMSE) and the Nash-Sutcliffe Efficiency Index (Ef) are used to evaluate the performance of the proposed techniques. Based on these performance measures, the results indicate that the wavelet based analyses reproduce the variations well in the case of six-months-ahead forecasts, maintaining harmony with the observed values at most of the sites. Although both methods are observed to capture the minima of the time series effectively for both six- and twelve-months-ahead predictions, better forecasts are obtained with the wavelet based method than with the EMD based method in the twelve-months-ahead case. It is therefore inferred that the wavelet based method has better prediction capabilities than the EMD based method, despite some of the limitations of time series methods and of the manner in which the decomposition takes place. Finally, the study concludes that the wavelet based time series algorithm could be used to model events such as droughts with reasonable accuracy. Some modifications that could be made to the model are also suggested, which can extend its scope of applicability to other areas in the field of hydrology.
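The two performance measures are standard and easy to state in code (a sketch; NRMSE has several normalisation conventions, and the range-based one below is only an assumption about the thesis's choice):

```python
import math

def nrmse(obs, pred):
    """Root mean square error normalised by the observed range."""
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return math.sqrt(mse) / (max(obs) - min(obs))

def nash_sutcliffe(obs, pred):
    """Nash-Sutcliffe efficiency Ef: 1 is a perfect forecast, 0 means the
    forecast is no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den
```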
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Min-Ta, and 楊明達. "The VLSI Design of Discrete Wavelet Transform and Inverse Discrete Wavelet Transform for JPEG2000." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/93707017269338386899.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Electrical and Control Engineering
91
The discrete wavelet transform (DWT) technique has been widely used in signal processing and image processing. Since the DWT compacts most of the signal energy into the low-frequency coefficients, it is suitable for data compression. In the recent industry standards JPEG2000 and MPEG-4, the DWT has replaced the DCT, previously the main approach to image compression. This thesis focuses on the hardware architectures of the 1-layer 1-dimensional DWT, the multi-layer 1-dimensional DWT and the multi-layer 2-dimensional DWT specified in JPEG2000. Based on this architecture, the DWT and the inverse discrete wavelet transform (IDWT) can be integrated easily. Furthermore, the module design concept allows the DWT to be implemented with different hardware architectures. In this thesis, two different architectures are used to implement the multi-layer 1-dimensional DWT. In addition, accompanied by a "2D-controller", the multi-layer 1-dimensional DWT can be adopted in the multi-layer 2-dimensional DWT hardware architecture. Finally, the gate-level simulation and place-and-route are done with Synopsys and Avant! tools. The chip has advantages in image processing capability, execution time and chip area.
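The layered structure described above amounts to feeding each stage's low-pass output into the next stage. A behavioural sketch in Python (Haar filters stand in for whatever JPEG2000 filter bank the chip implements; this is not the thesis's RTL):

```python
import math

def haar_step(x):
    """One analysis layer: low-pass and high-pass halves."""
    s = math.sqrt(2.0)
    a = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return a, d

def multilevel_dwt(x, levels):
    """Multi-layer 1-D DWT: each layer halves the approximation band and
    emits one detail band, mirroring the cascaded hardware layers."""
    details = []
    approx = x
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details
```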
APA, Harvard, Vancouver, ISO, and other styles
43

Das, Swastik, and Rasmi Ranjan Sethy. "Image Compression using Discrete Cosine Transform & Discrete Wavelet Transform." Thesis, 2009. http://ethesis.nitrkl.ac.in/1119/1/Image_Compression_using_DCT_%26_DWT.pdf.

Full text
Abstract:
Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression is achieved by the removal of one or more of three basic data redundancies: (1) coding redundancy, which is present when less than optimal (i.e., the smallest length) code words are used; (2) interpixel redundancy, which results from correlations between the pixels of an image; and (3) psychovisual redundancy, which is due to data that is ignored by the human visual system (i.e., visually nonessential information). Huffman codes contain the smallest possible number of code symbols (e.g., bits) per source symbol (e.g., grey level value) subject to the constraint that the source symbols are coded one at a time. So Huffman coding, when combined with the technique of reducing image redundancies using the Discrete Cosine Transform (DCT), helps in compressing the image data to a very good extent. The DCT is an example of transform coding, and the current JPEG standard uses it as its basis. The DCT relocates the highest energies to the upper left corner of the image; the lesser energy or information is relocated into other areas. The DCT is fast: it can be quickly calculated and is best for images with smooth edges, like photos with human subjects. The DCT coefficients are all real numbers, unlike those of the Fourier transform. The Inverse Discrete Cosine Transform (IDCT) can be used to retrieve the image from its transform representation. The discrete wavelet transform (DWT) has gained widespread acceptance in signal processing and image compression. Because of their inherent multi-resolution nature, wavelet coding schemes are especially suitable for applications where scalability and tolerable degradation are important. Recently the JPEG committee released its new image coding standard, JPEG 2000, which is based upon the DWT.
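The energy-compaction claim is easy to verify with a direct orthonormal DCT-II (an O(N²) teaching sketch, not the fast transform a real codec would use):

```python
import math

def dct_2(x):
    """Orthonormal 1-D DCT-II; for smooth inputs almost all of the energy
    lands in the first few coefficients."""
    n = len(x)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                           for i in range(n)))
    return out
```

On the smooth ramp 1..8, the first two coefficients hold over 95% of the total energy, while the orthonormal scaling keeps the total energy unchanged.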
APA, Harvard, Vancouver, ISO, and other styles
44

Bhawna, Gauatm. "Image compression using discrete cosine transform and discrete wavelet transform." Thesis, 2010. http://ethesis.nitrkl.ac.in/1731/1/project.pdf.

Full text
Abstract:
Compression is used especially for images where tolerable degradation is acceptable. With the wide use of computers, and the consequent need for large-scale storage and transmission of data, efficient ways of storing data have become necessary. With the growth of technology and the entrance into the Digital Age, the world has found itself amid a vast amount of information, and dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages. JPEG and JPEG 2000 are two important techniques used for image compression. The JPEG image compression standard uses the DCT (Discrete Cosine Transform). The discrete cosine transform is a fast transform and a widely used, robust method for image compression with excellent compaction for highly correlated data. The DCT has fixed basis images and gives a good compromise between information packing ability and computational complexity. The JPEG 2000 image compression standard makes use of the DWT (Discrete Wavelet Transform). The DWT can be used to reduce the image size without losing much of the resolution: the coefficients are computed, and values less than a pre-specified threshold are discarded. Thus it reduces the amount of memory required to represent a given image.
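The threshold-and-discard scheme in the last sentence can be demonstrated end to end with a one-level Haar transform (toy code with my own function names; JPEG 2000 actually uses the 9/7 or 5/3 filters plus entropy coding on top):

```python
import math

def haar_fwd(x):
    s = math.sqrt(2.0)
    a = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return a, d

def haar_inv(a, d):
    s = math.sqrt(2.0)
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / s, (ai - di) / s])
    return x

def compress(x, threshold):
    """Zero out detail coefficients below `threshold`, then reconstruct:
    small local variations are lost, large features survive."""
    a, d = haar_fwd(x)
    d = [0.0 if abs(v) < threshold else v for v in d]
    return haar_inv(a, d)
```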
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Jyh-Wei, and 王志雄. "The VLSI Design for Discrete Wavelet Transform and Inverse Discrete Wavelet Transform using Embedded Instruction Codes." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/06910225125080712632.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Electrical and Control Engineering
89
The discrete wavelet transform (DWT) technique has been widely used in signal processing and image processing. Since the DWT compacts most of the signal energy into the low-frequency coefficients, it is suitable for data compression. In the recent industry standards JPEG2000 and MPEG-4, the DWT has replaced the DCT, previously the main approach to image compression. However, there are difficulties in implementation, such as the heavy computation and complex operation procedures. Due to these drawbacks, the thesis proposes a design rule named "Embedded Instruction Code" (EIC) and focuses on the hardware architectures of the 1-stage 1-dimensional DWT, the multi-stage 1-dimensional DWT and the multi-stage 2-dimensional DWT. Based on this rule, the DWT and the inverse discrete wavelet transform (IDWT) can be integrated easily. Furthermore, the module design concept allows the DWT to be implemented with different hardware architectures. Using EIC, the computation of the DWT and IDWT can be translated into ALU instruction codes. Moreover, the ALU architecture of the 1-stage DWT is worked out in three different versions. In addition, accompanied by the Recursive Pyramid Algorithm (RPA), the EIC can be adopted in the multi-stage 1-dimensional DWT and the multi-stage 2-dimensional DWT. Finally, the gate-level simulation and place-and-route are done with Synopsys and Cadence tools. The chip has excellent advantages in image processing capability, execution time and chip area.
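The thesis's actual instruction set is not reproduced here, but the idea of expressing the transform as ALU instruction codes can be illustrated with a hypothetical three-opcode machine (every opcode and register name below is my invention for illustration, not EIC itself):

```python
import math

def run(program, regs):
    """Tiny ALU interpreter: each instruction is (dst, opcode, src1, src2)."""
    for dst, op, a, b in program:
        if op == "ADD":
            regs[dst] = regs[a] + regs[b]
        elif op == "SUB":
            regs[dst] = regs[a] - regs[b]
        elif op == "MUL":
            regs[dst] = regs[a] * regs[b]
    return regs

# A Haar analysis butterfly as an instruction sequence:
# low = (x0 + x1) * c,  high = (x0 - x1) * c,  with c = 1/sqrt(2).
HAAR_FWD = [("t0", "ADD", "x0", "x1"),
            ("t1", "SUB", "x0", "x1"),
            ("low", "MUL", "t0", "c"),
            ("high", "MUL", "t1", "c")]
```

The synthesis step (IDWT) would simply be a different code sequence run on the same ALU, which is the integration benefit claimed above.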
APA, Harvard, Vancouver, ISO, and other styles
46

Shen, Nu-Chuan, and 沈汝川. "Sectioned Convolution for Discrete Wavelet Transform." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/79034618082699960740.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
96
Discrete wavelet transform (DWT) is a very popular mathematical tool. It has been widely applied in engineering, signal processing, image processing, etc. In this thesis, we introduce the DWT and its applications, and then use a method proposed in this thesis, called sectioned convolution, to reduce the complexity of the DWT. Sectioned convolution is a fast convolution algorithm that splits the input signal into sections of length L, so we do not have to wait until the whole signal is received before starting the convolution. It not only solves the delay problem but also greatly reduces the computation time and computational complexity. The sectioned convolution discrete wavelet transform (SCDWT) is an application of sectioned convolution: it replaces all the traditional convolution computations in the DWT with sectioned convolutions. The efficient-implementation sectioned convolution discrete wavelet transform (EISCDWT) is an efficient way to implement the DWT; its concept is just like that of the efficient-implementation discrete wavelet transform, but with sectioned convolution in place of traditional convolution. This replacement reduces the computational complexity and computation time. Besides the advantages mentioned above, the system complexity is also reduced: because the signal is split into sections of the same length L, the FFT size is fixed. Recently, there have been many research works on the DWT, and it has been used in many applications. We believe that the algorithm proposed in this thesis can make the DWT more powerful and that it has a lot of potential for the future. In this thesis, I systematically introduce the research works on the DWT, including the work of my professor and myself, and give a detailed comparison to previous works. In Chap. 1, I introduce the basic ideas and history of the wavelet transform. In Chap. 2, I introduce the definition and the computational complexity of the DWT, including a detailed derivation and its properties. In Chap. 3, I briefly introduce the applications of the DWT. In Chap. 4, I introduce the EIDWT and compare its computational complexity to that of the traditional DWT. In Chap. 5, I introduce the sectioned convolution and compare it to traditional convolution in computation time and computational complexity. For a fair comparison, all the programs in this thesis were written by myself. In Chap. 9, I give a detailed analysis of the SCDWT and EISCDWT and a comparison between the DWT, SCDWT and EISCDWT; at the end of that chapter, I compare JPEG2000 with EISCDWT against JPEG with DCT. In Chaps. 7 and 8, I introduce other methods for improving the efficiency of the DWT. May this thesis be helpful to you.
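Sectioned convolution as described (fixed section length L, no need to wait for the whole signal) is essentially the overlap-add method. A minimal sketch using direct block convolution (the thesis additionally evaluates each block with a fixed-size FFT):

```python
def direct_conv(x, h):
    """Direct (full) linear convolution, length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def sectioned_conv(x, h, L):
    """Overlap-add sectioned convolution: process the input in blocks of
    length L as they arrive, summing the overlapping tails into y."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        yb = direct_conv(block, h)
        for k, v in enumerate(yb):
            y[start + k] += v
    return y
```

Because every block has the same length L, the FFT that would replace `direct_conv` in a real implementation has a fixed size, which is the system-complexity advantage mentioned above.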
APA, Harvard, Vancouver, ISO, and other styles
47

Chiang, Tung-Hsien. "Discrete Wavelet Transform Based Speaker Recognition." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0020-2607200713324300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Shen, Nu-Chuan. "Sectioned Convolution for Discrete Wavelet Transform." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0907200818092200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Chiang, Tung-Hsien, and 江宗憲. "Discrete Wavelet Transform Based Speaker Recognition." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/08131235953743495256.

Full text
Abstract:
Master's
National Chi Nan University
Department of Electrical Engineering
95
This thesis proposes a speaker recognition system based on the wavelet transform. In recent years, research on speaker features has usually adopted Mel-frequency cepstral coefficients (MFCC). However, background noise may affect speaker recognition in real life, such as the sounds of movement, engine running, speed changes, braking, percussion, and so on. This thesis introduces the technique of the wavelet transform: each frame is decomposed into frequency bands by the wavelet transform, the characteristic parameters of each individual frequency band are calculated, and the speaker models are established from them, giving more reliable speaker models. In the speaker recognition experiments, we use Chinese digit (0-9) words from 10 speakers, each spoken about 20 times per speaker. Finally, the system is tested under variable background noise levels, with about 200 files per person used as reference data and the others as test data. We find that the wavelet transform based speaker recognition system is more effective and efficient than traditional speaker recognition; the recognition rate improves by about 2%-5%.
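The band-wise feature extraction can be sketched as follows: decompose each frame with a wavelet filter bank and take the log energy of every sub-band (an illustrative stand-in; the thesis's exact wavelet and characteristic parameters are not specified in this abstract):

```python
import math

def haar_step(x):
    s = math.sqrt(2.0)
    a = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return a, d

def subband_log_energies(frame, levels=3):
    """Per-frame log energies of the wavelet sub-bands: one value per
    detail band plus one for the final approximation band."""
    feats = []
    approx = frame
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energy = sum(v * v for v in detail)
        feats.append(math.log(energy + 1e-12))  # floor avoids log(0)
    feats.append(math.log(sum(v * v for v in approx) + 1e-12))
    return feats
```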
APA, Harvard, Vancouver, ISO, and other styles
50

Lewis, James M. "The continuous wavelet transform: A discrete approximation." Thesis, 1998. http://hdl.handle.net/1911/17192.

Full text
Abstract:
In this thesis, we develop an approximation to the continuous wavelet transform (CWT) which is unique in that it does not require an exact scaling relationship between the levels of the transform, but asymptotically approaches an irrational scaling ratio of $2^{1/n_0}$, where $n_0$ is related to the number of vanishing moments of the original scaling filter. The autocorrelation sequences of the scaling and wavelet filters associated with the Daubechies family of orthonormal compactly supported wavelets are shown to converge to smooth symmetric wavelets which approximate the Deslauriers and Dubuc limiting functions. We show why this transform is superior to a conventional dyadic wavelet transform for the edge detection application, and analyze its performance in denoising applications.
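The symmetry of those autocorrelation sequences, and the vanishing even lags that orthonormality forces, can be checked numerically for the Daubechies length-4 scaling filter (a sketch reproducing two standard identities, not the thesis's convergence argument):

```python
import math

def autocorr(h):
    """Autocorrelation sequence r[k] = sum_n h[n] h[n+k], lags -(N-1)..N-1."""
    n = len(h)
    return [sum(h[i] * h[i + k] for i in range(n) if 0 <= i + k < n)
            for k in range(-(n - 1), n)]
```

For an orthonormal scaling filter the lag-0 value is 1 and all nonzero even lags vanish, while the full sequence is symmetric about lag 0.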
APA, Harvard, Vancouver, ISO, and other styles