To see the other types of publications on this topic, follow the link: Wavelet processing.

Dissertations / Theses on the topic 'Wavelet processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Wavelet processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

May, Heather. "Wavelet-based Image Processing." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1448037498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shi, Fangmin. "Wavelet transforms for stereo imaging." Thesis, University of South Wales, 2002. https://pure.southwales.ac.uk/en/studentthesis/wavelet-transforms-for-stereo-imaging(65abb68f-e30b-4367-a3a8-b7b3df85f566).html.

Full text
Abstract:
Stereo vision is a means of obtaining three-dimensional information by considering the same scene from two different positions. Stereo correspondence has long been, and will continue to be, an active research topic in computer vision. The demand for dense disparity map output is motivated by modern applications of stereo such as three-dimensional high-resolution object reconstruction and view synthesis, which require disparity estimates in all image regions. Stereo correspondence algorithms usually require significant computation; the challenges are computational economy, accuracy and robustness. While a large number of algorithms for stereo matching have been developed, there is still room for improvement, especially as a relatively new mathematical tool such as wavelet analysis matures. The aim of the thesis is to investigate stereo matching approaches using the wavelet transform, with a view to producing efficient and dense disparity map outputs. After the shift-invariance properties of various wavelet transforms are identified, the main contributions of the thesis are made in developing and evaluating two wavelet approaches (the dyadic wavelet transform and the complex wavelet transform) for solving the standard correspondence problem. This comprises an analysis of the applicability of the dyadic wavelet transform to disparity map computation, the definition of a wavelet-based similarity measure for matching, the combination of matching results from different scales based on the detectable minimum disparity at each scale, and the application of the complex wavelet transform to stereo matching. The matching method using the dyadic wavelet transform is based on SSD correlation comparison and is described in detail. A new measure using wavelet coefficients is defined for similarity comparison. The approach applying a dual tree of complex wavelet transforms to stereo matching is formulated through phase information. A multiscale matching scheme is applied for both matching methods. Image testing has been carried out with various synthesised and real image pairs. Experimental results with a variety of stereo image pairs exhibit good agreement with ground truth data, where available, and are qualitatively similar to published results for other stereo matching approaches. Comparative results show that the dyadic wavelet transform-based matching method is superior in most cases to the other approaches considered.
APA, Harvard, Vancouver, ISO, and other styles
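The abstract above defines a wavelet-coefficient SSD similarity measure for stereo matching. As a hedged illustration only, the sketch below runs a plain SSD disparity search along one scanline of coefficients; the window size, disparity range and the use of a single scale are assumptions rather than the thesis's measure, and the toy signals merely stand in for wavelet sub-band rows.

```python
import numpy as np

def ssd_disparity(left_coeffs, right_coeffs, max_disp=16, window=5):
    """Per-sample disparity on one scanline by minimising the SSD between
    windows of (wavelet) coefficients -- illustrative, single scale only."""
    half = window // 2
    n = left_coeffs.shape[0]
    disparity = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        ref = left_coeffs[x - half:x + half + 1]
        best_d, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_coeffs[x - d - half:x - d + half + 1]
            cost = np.sum((ref - cand) ** 2)   # sum of squared differences
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparity[x] = best_d
    return disparity

# toy example: two shifted noisy signals standing in for one coefficient scanline
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
left = sig
right = np.roll(sig, -4)            # ground-truth disparity of 4 samples
print(ssd_disparity(left, right, max_disp=8)[100:110])
```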
3

Choe, Gwangwoo. "Merged arithmetic for wavelet transforms /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Masud, Shahid. "VLSI systems for discrete wavelet transforms." Thesis, Queen's University Belfast, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Silwal, Sharad Deep. "Bayesian inference and wavelet methods in image processing." Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Silva, Eduardo Antonio Barros da. "Wavelet transforms for image coding." Thesis, University of Essex, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Jiangfeng. "Wavelet packet division multiplexing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0002/NQ42889.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Long, Christopher J. "Wavelet methods in speech recognition." Thesis, Loughborough University, 1999. https://dspace.lboro.ac.uk/2134/14108.

Full text
Abstract:
In this thesis, novel wavelet techniques are developed to improve parametrization of speech signals prior to classification. It is shown that non-linear operations carried out in the wavelet domain improve the performance of a speech classifier and consistently outperform classical Fourier methods. This is because of the localised nature of the wavelet, which captures correspondingly well-localised time-frequency features within the speech signal. Furthermore, by taking advantage of the approximation ability of wavelets, efficient representation of the non-stationarity inherent in speech can be achieved in a relatively small number of expansion coefficients. This is an attractive option when faced with the so-called 'Curse of Dimensionality' problem of multivariate classifiers such as Linear Discriminant Analysis (LDA) or Artificial Neural Networks (ANNs). Conventional time-frequency analysis methods such as the Discrete Fourier Transform either miss irregular signal structures and transients due to spectral smearing or require a large number of coefficients to represent such characteristics efficiently. Wavelet theory offers an alternative insight into the representation of these types of signals. As an extension to the standard wavelet transform, adaptive libraries of wavelet and cosine packets are introduced which increase the flexibility of the transform. This approach is observed to be yet more suitable for the highly variable nature of speech signals in that it results in a time-frequency sampled grid that is well adapted to irregularities and transients. These libraries also yield a corresponding reduction in the misclassification rate of the recognition system. However, this is necessarily at the expense of added computing time. Finally, a framework based on adaptive time-frequency libraries is developed which invokes the final classifier to choose the nature of the resolution for a given classification problem. The classifier then performs dimensionality reduction on the transformed signal by choosing the top few features based on their discriminant power. This approach is compared and contrasted to an existing discriminant wavelet feature extractor. The overall conclusions of the thesis are that wavelets and their relatives are capable of extracting useful features for speech classification problems. The use of adaptive wavelet transforms provides the flexibility within which powerful feature extractors can be designed for these types of application.
APA, Harvard, Vancouver, ISO, and other styles
9

Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Full text
Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients.
The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
APA, Harvard, Vancouver, ISO, and other styles
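The abstract above constructs interpolating wavelet filters with the lifting scheme. The following minimal sketch shows the split/predict/update pattern for the simplest (Haar) case in plain NumPy; it illustrates the lifting idea under that assumption and is not the dissertation's point- and average-interpolating construction.

```python
import numpy as np

def haar_lift_forward(x):
    """One Haar lifting step: split into even/odd samples, predict the odds
    from the evens, then update the evens to carry the pairwise averages
    (assumes an even-length input)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd sample predicted by its even neighbour
    approx = even + detail / 2.0   # update: evens become pairwise averages
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - detail / 2.0   # undo update
    odd = detail + even            # undo predict
    x = np.empty(approx.size + detail.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_lift_forward(x)
print(a, d)
print(np.allclose(haar_lift_inverse(a, d), x))   # perfect reconstruction
```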
10

Anton, Wirén. "The Discrete Wavelet Transform." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55063.

Full text
Abstract:
In this thesis we will explore the theory behind wavelets. The main focus is on the discrete wavelet transform, although to reach this goal we will also introduce the discrete Fourier transform, as it allows us to derive important properties related to wavelet theory, such as the multiresolution analysis. Based on the multiresolution analysis it will be shown how the discrete wavelet transform can be formulated and how it can be expressed in terms of a matrix. In later chapters we will see how the discrete wavelet transform can be generalized to two dimensions, and discover how it can be used in image processing.
APA, Harvard, Vancouver, ISO, and other styles
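The abstract above notes that the discrete wavelet transform can be expressed in terms of a matrix. A minimal sketch of that formulation for a single orthonormal Haar level follows; the signal length and the choice of the Haar wavelet are assumptions made purely for illustration.

```python
import numpy as np

def haar_dwt_matrix(n):
    """Single-level orthonormal Haar analysis matrix for even n:
    the first n/2 rows average adjacent pairs, the last n/2 rows difference them."""
    h = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for k in range(n // 2):
        h[k, 2 * k], h[k, 2 * k + 1] = s, s                      # approximation rows
        h[n // 2 + k, 2 * k], h[n // 2 + k, 2 * k + 1] = s, -s    # detail rows
    return h

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
H = haar_dwt_matrix(len(x))
coeffs = H @ x                                 # one DWT level as a matrix-vector product
print(np.allclose(H @ H.T, np.eye(len(x))))    # H is orthogonal
print(np.allclose(H.T @ coeffs, x))            # the inverse transform is simply H^T
```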
11

Dragotti, Pier Luigi. "Wavelet footprints and frames for signal processing and communication /." [S.l.] : [s.n.], 2002. http://library.epfl.ch/theses/?nr=2559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Ardolino, Richard S. "Wavelet-based signal processing of electromagnetic pulse generated waveforms." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Sep%5FArdolino.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2007.
Thesis Advisor(s): Tummala, Richard S. "September 2007." Description based on title screen as viewed on October 22, 2007. Includes bibliographical references (p. 83). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
13

NiBouch, M. "Design and FPGA implementations for discrete wavelet transforms." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Lutz, Steven S. "Hokua – A Wavelet Method for Audio Fingerprinting." Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3247.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Andréasson, Thomas. "Signal Processing Using Wavelets in a Ground Penetrating Radar System." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1774.

Full text
Abstract:

This master's thesis explores whether time-frequency techniques can be utilized in a ground penetrating radar system. The system studied is the HUMUS system which has been developed at FOI, and which is used for the detection and classification of buried land mines.

The objective of this master's thesis is twofold. First of all it is supposed to give a theoretical introduction to the wavelet transform and wavelet packets, and also to introduce general time-frequency transformations. Secondly, the thesis presents and implements an adaptive method, which is used to perform the task of a feature extractor.

The wavelet theory presented in this thesis gives a first introduction to the concept of time-frequency transformations. The wavelet transform and wavelet packets are studied in detail. The most important goal of this introduction is to define the theoretical background needed for the second objective of the thesis. However, some additional concepts will also be introduced, since they were deemed necessary to include in an introduction to wavelets.

To illustrate the possibilities of wavelet techniques in the existing HUMUS system, one specific application has been chosen. The application chosen is feature extraction. The method for feature extraction described in this thesis uses wavelet packets to transform the original radar signal into a form where the features of the signal are better revealed. One of the algorithm's strengths is its ability to adapt itself to the kind of input radar signals expected. The algorithm picks the "best" wavelet packet from a large number of possible wavelet packets.

The method we use in this thesis derives from a previously published dissertation. The method proposed in that dissertation has been adapted to the specific environment of the HUMUS system. It has also been implemented in MATLAB and tested using data obtained by the HUMUS system. The results are promising; even "weak" objects can be revealed using the method.

APA, Harvard, Vancouver, ISO, and other styles
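The abstract above selects the "best" wavelet packet to adapt the transform to the expected radar signals. The sketch below shows the classical entropy-cost best-basis comparison over a small packet tree, built here with PyWavelets' pywt.dwt; the wavelet, depth and cost function are assumptions, and the thesis's own selection algorithm may differ.

```python
import numpy as np
import pywt

def shannon_cost(c):
    """Additive Shannon-entropy cost commonly used in best-basis selection."""
    p = c ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def best_basis(signal, wavelet="db4", max_level=2):
    """Keep a parent node if its cost is lower than the summed cost of the best
    bases of its children, otherwise split; returns the retained coefficient arrays."""
    def recurse(node, level):
        if level == max_level:
            return [node]
        a, d = pywt.dwt(node, wavelet)
        children = recurse(a, level + 1) + recurse(d, level + 1)
        if shannon_cost(node) <= sum(shannon_cost(c) for c in children):
            return [node]
        return children
    return recurse(np.asarray(signal, dtype=float), 0)

t = np.linspace(0, 1, 512)
chirp = np.sin(2 * np.pi * (5 + 40 * t) * t)   # stand-in for a radar trace
leaves = best_basis(chirp)
print([leaf.shape for leaf in leaves])
```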
16

He, Zhenyu. "Writer identification using wavelet, contourlet and statistical models." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Tourshan, Khaled. "Parameterization of slant and slantlet/wavelet transforms with applications /." Thesis, Connect to Dissertations & Theses @ Tufts University, 2003.

Find full text
Abstract:
Thesis (Ph.D.)--Tufts University, 2003.
Adviser: Joseph P. Noonan. Submitted to the Dept. of Electrical Engineering. Includes bibliographical references (leaves 149-149). Access restricted to members of the Tufts University community. Also available via the World Wide Web;
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Xin-Gong. "Application of wavelet transforms to seismic data processing and inversion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25094.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Parker, Kristen Michelle. "Watermarking with wavelet transforms." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-11062007-153859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Bopardikar, Ajit S. "Speech Encryption Using Wavelet Packets." Thesis, Indian Institute of Science, 1995. http://hdl.handle.net/2005/153.

Full text
Abstract:
The aim of speech scrambling algorithms is to transform clear speech into an unintelligible signal so that it is difficult to decrypt it in the absence of the key. Most of the existing speech scrambling algorithms tend to retain considerable residual intelligibility in the scrambled speech and are easy to break. Typically, a speech scrambling algorithm involves permutation of speech segments in time, frequency or time-frequency domain or permutation of transform coefficients of each speech block. The time-frequency algorithms have given very low residual intelligibility and have attracted much attention. We first study the uniform filter bank based time-frequency scrambling algorithm with respect to the block length and number of channels. We use objective distance measures to estimate the departure of the scrambled speech from the clear speech. Simulations indicate that the distance measures increase as we increase the block length and the number of channels. This algorithm derives its security only from the time-frequency segment permutation and it has been estimated that the effective number of permutations which give a low residual intelligibility is much less than the total number of possible permutations. In order to increase the effective number of permutations, we propose a time-frequency scrambling algorithm based on wavelet packets. By using different wavelet packet filter banks at the analysis and synthesis end, we add an extra level of security since the eavesdropper has to choose the correct analysis filter bank, correctly rearrange the time-frequency segments, and choose the correct synthesis bank to get back the original speech signal. Simulations performed with this algorithm give distance measures comparable to those obtained for the uniform filter bank based algorithm. Finally, we introduce the 2-channel perfect reconstruction circular convolution filter bank and give a simple method for its design. The filters designed using this method satisfy the paraunitary properties on a discrete equispaced set of points in the frequency domain.
APA, Harvard, Vancouver, ISO, and other styles
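The abstract above scrambles speech by permuting time-frequency segments of a wavelet(-packet) decomposition under a key. A minimal sketch follows that permutes fixed-length segments within each sub-band of a pywt.wavedec decomposition using a keyed RNG; the segment layout, wavelet and key handling are assumptions, and a descrambler would invert the stored permutations before reconstruction, using the thesis's wavelet-packet filter banks rather than this uniform decomposition.

```python
import numpy as np
import pywt

def permute_band(band, rng, n_seg=8):
    """Permute equal-length segments of one sub-band (any tail is left in place)."""
    seg_len = band.size // n_seg
    head = band[:seg_len * n_seg].reshape(n_seg, seg_len)
    order = rng.permutation(n_seg)
    return np.concatenate([head[order].ravel(), band[seg_len * n_seg:]]), order

def scramble(speech, key, wavelet="db8", level=4):
    """Keyed time-frequency scrambling: permute segments inside every sub-band."""
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec(np.asarray(speech, dtype=float), wavelet, level=level)
    scrambled, orders = [], []
    for band in coeffs:
        band_s, order = permute_band(band, rng)
        scrambled.append(band_s)
        orders.append(order)          # keep the key schedule for descrambling
    return pywt.waverec(scrambled, wavelet)[: len(speech)], orders

fs = 8000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 300 * t) * np.exp(-3 * t)   # stand-in for speech
garbled, key_schedule = scramble(speech_like, key=1234)
print(garbled.shape)
```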
21

Tassignon, Hugo. "Solutions to non-stationary problems in wavelet space." Thesis, De Montfort University, 1997. http://hdl.handle.net/2086/13259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Lu. "Geometric transformation and image singularity with wavelet analysis." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Jin, Shasha, and Ningcheng Gaoding. "Signal processing using the wavelet transform and the Karhunen-Loeve transform." Thesis, Högskolan Kristianstad, Sektionen för hälsa och samhälle, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-9752.

Full text
Abstract:
This degree project deals with the wavelet transform (WT) and the Karhunen-Loeve transform (KLT). Through mathematical description and simulation, it investigates the denoising ability of the WT and the decorrelation ability of the KLT, and it mainly demonstrates that a new algorithm combining these two transforms is feasible.
APA, Harvard, Vancouver, ISO, and other styles
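The abstract above combines wavelet denoising with the decorrelation ability of the KLT. As a hedged sketch of how such a joint scheme might look, the code below soft-thresholds wavelet coefficients and then applies a KLT (eigen-decomposition of the frame covariance) to the denoised signal; the ordering, threshold rule and frame length are assumptions rather than the project's actual algorithm.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail bands with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest band
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

def klt(frames):
    """Karhunen-Loeve transform: project frames onto covariance eigenvectors."""
    centred = frames - frames.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    return centred @ vecs[:, ::-1]       # components ordered by decreasing variance

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1024))
noisy = clean + 0.3 * rng.standard_normal(1024)
denoised = wavelet_denoise(noisy)
frames = denoised[: (denoised.size // 64) * 64].reshape(-1, 64)
decorrelated = klt(frames)
print(denoised.shape, decorrelated.shape)
```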
24

Lin, Shui-Town. "Gear condition monitoring by wavelet transform of vibration signals." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chaiyaboonthanit, Thanit. "Image coding using wavelet transform and adaptive block truncation coding /." Online version of thesis, 1991. http://hdl.handle.net/1850/10913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Enfedaque, Montes Pablo. "GPU Architectures for Wavelet-based Image Coding Acceleration." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/405310.

Full text
Abstract:
Modern image coding systems employ computationally demanding techniques to achieve image compression. Image codecs are often used in applications that require real-time processing, so it is common in those scenarios to employ specialized hardware, such as Field-Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs). GPUs are throughput-oriented, highly parallel architectures that represent an interesting alternative to dedicated hardware. They are software re-programmable, widely available, energy efficient, and they offer very competitive peak computational performance. Wavelet-based image coding systems are those that employ some kind of wavelet transformation before the data coding stage. Arguably, JPEG2000 is the most representative of those systems. Many research projects have tried to develop GPU implementations of JPEG2000 to speed up the coding pipeline. Although some stages of the pipeline are very suitable for GPU computing, the data coding stage does not expose enough fine-grained parallelism. Data coding is the most computationally demanding stage (75% of the total execution time) and represents the bottleneck of the pipeline. The research presented in this thesis focuses on the GPU computing of the most critical stages of wavelet-based image coding systems: the wavelet transform and the data coding stage. This thesis proposes three main contributions. The first is a GPU-accelerated implementation of the Discrete Wavelet Transform. The proposed implementation achieves speedups up to 4x with respect to the previous state-of-the-art GPU solutions. The second contribution is the analysis and reformulation of the data coding stage of JPEG2000. We propose a new parallel-friendly high performance coding engine: Bitplane Image Coding with Parallel Coefficient Processing (BPC-PaCo). BPC-PaCo reformulates the mechanisms of data coding, without renouncing to any of the advanced features of traditional data coding. The last contribution of this thesis presents an optimized GPU implementation of BPC-PaCo. It compares its performance with the most competitive JPEG2000 implementations in both CPU and GPU, revealing speedups up to 30x with respect to the fastest implementation.
APA, Harvard, Vancouver, ISO, and other styles
27

Khan, Ekram. "Efficient and robust wavelet based image/video coding techniques." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Almeida, Luis Miguel Lima de. "All-optical processing based on integrated optics." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13705.

Full text
Abstract:
Master's degree in Electronic and Telecommunications Engineering
During the last years, the demand for high data transfer rates in optical fiber communications has increased exponentially. Since an image in its original format, exactly as captured by the digital camera, requires an enormous amount of storage capacity, it is important to develop a system that increases its degree of compression while preserving the important information in the image. In image compression, there are several transformation techniques used for data compression. The Discrete Wavelet Transform (DWT) is one of the most commonly used, thanks to its multi-resolution transformation. This multi-resolution property allows the development not only of a lossless compression method, from which the original image can be obtained exactly as it was before the transform, but also of a lossy method, where it is no longer possible to obtain the original image. In this context, this thesis develops the idea of applying the Haar wavelet transform using optical circuits. This concept is analyzed, verifying the possibility of its implementation in the optical domain using several methods, lossy and lossless, in order to conclude which compression method is best to apply to an image. Finally, the lossy method is tested in the laboratory with different components, and an optical device capable of performing the Haar wavelet transform is designed.
APA, Harvard, Vancouver, ISO, and other styles
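The abstract above contrasts lossless and lossy compression built on the Haar wavelet. The sketch below performs one separable 2D Haar level in NumPy: keeping all four sub-bands gives an exact round trip, while zeroing the detail bands gives a lossy approximation. The averaging/differencing normalisation is an assumption for illustration, and the optical implementation itself is not modelled.

```python
import numpy as np

def haar2d_forward(img):
    """One separable 2D Haar level: returns the approximation and three detail bands."""
    img = img.astype(float)
    rows_lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    rows_hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (rows_lo[0::2, :] + rows_lo[1::2, :]) / 2.0
    lh = (rows_lo[0::2, :] - rows_lo[1::2, :]) / 2.0
    hl = (rows_hi[0::2, :] + rows_hi[1::2, :]) / 2.0
    hh = (rows_hi[0::2, :] - rows_hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    h, w = ll.shape
    rows_lo = np.empty((2 * h, w)); rows_hi = np.empty((2 * h, w))
    rows_lo[0::2], rows_lo[1::2] = ll + lh, ll - lh
    rows_hi[0::2], rows_hi[1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = rows_lo + rows_hi, rows_lo - rows_hi
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_forward(img)
print(np.allclose(haar2d_inverse(ll, lh, hl, hh), img))   # lossless round trip
zeros = np.zeros_like(lh)
approx = haar2d_inverse(ll, zeros, zeros, zeros)          # lossy: detail bands dropped
print(float(np.abs(approx - img).max()))
```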
29

Zhou, Meng. "Vibration Extraction Using Rolling Shutter Cameras." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34963.

Full text
Abstract:
Measurements of vibrations, such as sound hitting an object or running a motor, are widely used in industry and research. Traditional methods need either direct contact with the object or a laser vibrometer. Although computer vision methods have been applied to solve this problem, high speed cameras are usually preferred. This study employs a consumer level rolling shutter camera for extracting main frequency components of small vibrations. A rolling shutter camera exposes continuously over time on the vertical direction of the sensor, and produces images with shifted rows of objects. We utilize the rolling shutter effect to boost our capability to extract vibration frequencies higher than the frame rate. Assuming the vibration amplitude of the target results in a horizontal fronto-parallel component in the image, we compute the displacement of each row from a reference frame by our novel phase matching approach in the complex-valued Shearlet transform domain. So far the only way to process rolling shutter video for vibration extraction is with the Steerable Pyramid in a motion magnification framework. However, the Shearlet transform is well localized in scale, location and orientation, and hence better suited to vibration extraction than the Steerable Pyramid used in the high speed video approach. Using our rolling shutter approach, we manage to recover signals from 75Hz to 500Hz from videos of 30fps. We test our method by controlled experiments with a loudspeaker. We play sounds with certain frequency components and take videos of the loudspeaker's surface. Our approach recovers chirp signals as well as single frequency signals from rolling shutter videos. We also test with music and speech. Both experiments produce identifiable recovered audio.
APA, Harvard, Vancouver, ISO, and other styles
30

Hua, Jianping. "Topics in genomic image processing." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3244.

Full text
Abstract:
The image processing methodologies that have been actively studied and developed now play a very significant role in the flourishing biotechnology research. This work studies, develops and implements several image processing techniques for M-FISH and cDNA microarray images. In particular, we focus on three important areas: M-FISH image compression, microarray image processing and expression-based classification. Two schemes, embedded M-FISH image coding (EMIC) and Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis, have been introduced for M-FISH image compression and microarray image processing, respectively. In the expression-based classification area, we investigate the relationship between optimal number of features and sample size, either analytically or through simulation, for various classifiers.
APA, Harvard, Vancouver, ISO, and other styles
31

Rowley, Alexander. "Signal processing methods for cerebral autoregulation." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:3d85ab53-9c9b-4b50-98f2-2e67848e5da4.

Full text
Abstract:
Cerebral autoregulation describes the clinically observed phenomenon that cerebral blood flow remains relatively constant in healthy human subjects despite large systemic changes in blood pressure, dissolved blood gas concentrations, heart rate and other systemic variables. Cerebral autoregulation is known to be impaired post ischaemic stroke, after severe head injury, in patients suffering from autonomic dysfunction and under the action of various drugs. Cerebral auto-regulation is a dynamic, multivariate phenomenon. Sensitive techniques are required to monitor cerebral auto-regulation in a clinical setting. This thesis presents 4 related signal processing studies of cerebral autoregulation. The first study shows how consideration of changes in blood gas concentrations simultaneously with changes in blood pressure can improve the accuracy of an existing frequency domain technique for monitoring cerebral autoregulation from spontaneous fluctuations in blood pressure and a transcranial doppler measure of cerebral blood flow velocity. The second study shows how the continuous wavelet transform can be used to investigate coupling between blood pressure and near infrared spectroscopy measures of cerebral haemodynamics in patients with autonomic failure. This introduces time information into the frequency based assessment, however neglects the contribution of blood gas concentrations. The third study shows how this limitation can be resolved by introducing a new time-varying multivariate system identification algorithm based around the dual tree undecimated wavelet transform. All frequency and time-frequency domain methods of monitoring cerebral autoregulation assume linear coupling between the variables under consideration. The fourth study therefore considers nonlinear techniques of monitoring cerebral autoregulation, and illustrates some of the difficulties inherent in this form of analysis. The general approach taken in this thesis is to formulate a simple system model; usually in the form of an ODE or a stochastic process. The form of the model is adapted to encapsulate a hypothesis about features of cerebral autoregulation, particularly those features that may be difficult to recover using existing methods of analysis. The performance of the proposed method of analysis is then evaluated under these conditions. After this testing, the techniques are then applied to data provided by the Laboratory of Human Cerebrovascular Physiology in Alberta, Canada, and the National Hospital for Neurology and Neurosurgery in London, UK.
APA, Harvard, Vancouver, ISO, and other styles
32

Lê, Nguyên Khoa 1975. "Time-frequency analyses of the hyperbolic kernel and hyperbolic wavelet." Monash University, Dept. of Electrical and Computer Systems Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Harmse, Wynand. "Wavelet-based speech enhancement : a statistical approach." Thesis, Stellenbosch : University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16336.

Full text
Abstract:
Thesis (MScIng)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: Speech enhancement is the process of removing background noise from speech signals. The equivalent process for images is known as image denoising. While the Fourier transform is widely used for speech enhancement, image denoising typically uses the wavelet transform. Research on wavelet-based speech enhancement has only recently emerged, yet it shows promising results compared to Fourier-based methods. This research is enhanced by the availability of new wavelet denoising algorithms based on the statistical modelling of wavelet coefficients, such as the hidden Markov tree. The aim of this research project is to investigate wavelet-based speech enhancement from a statistical perspective. Current Fourier-based speech enhancement and its evaluation process are described, and a framework is created for wavelet-based speech enhancement. Several wavelet denoising algorithms are investigated, and it is found that the algorithms based on the statistical properties of speech in the wavelet domain outperform the classical and more heuristic denoising techniques. The choice of wavelet influences the quality of the enhanced speech and the effect of this choice is therefore examined. The introduction of a noise floor parameter also improves the perceptual quality of the wavelet-based enhanced speech, by masking annoying residual artifacts. The performance of wavelet-based speech enhancement is similar to that of the more widely used Fourier methods at low noise levels, with a slight difference in the residual artifact. At high noise levels, however, the Fourier methods are superior.
APA, Harvard, Vancouver, ISO, and other styles
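The abstract above introduces a noise-floor parameter that masks residual artifacts in wavelet-thresholded speech. Below is a minimal sketch, assuming PyWavelets, the universal threshold, and a hypothetical floor fraction that blends a small portion of the original band back in; the statistical (hidden Markov tree style) models discussed in the thesis are not reproduced here.

```python
import numpy as np
import pywt

def enhance(noisy, wavelet="db8", level=5, floor=0.1):
    """Soft-threshold the detail coefficients, then blend a small noise floor
    back in so residual 'musical' artifacts are masked (floor=0 is plain thresholding)."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    cleaned = [coeffs[0]]
    for band in coeffs[1:]:
        shrunk = pywt.threshold(band, thr, mode="soft")
        cleaned.append((1.0 - floor) * shrunk + floor * band)   # noise-floor blend
    return pywt.waverec(cleaned, wavelet)[: noisy.size]

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 4096)
speech_like = np.sin(2 * np.pi * 180 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
noisy = speech_like + 0.2 * rng.standard_normal(t.size)
print(enhance(noisy).shape)
```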
34

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Júnior, Sylvio Barbon. "Dynamic Time Warping baseado na transformada wavelet." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-15042008-211812/.

Full text
Abstract:
Dynamic Time Warping (DTW) is a pattern matching technique for speech recognition that is based on a temporal alignment of the input signal with the template models. One drawback of this technique is its high computational cost. This work presents a modified version of the DTW, based on the Discrete Wavelet Transform (DWT), that reduces the complexity of the original algorithm. The performance obtained with the proposed algorithm is very promising, improving the recognition in terms of time and memory allocation, while the precision is not affected. Tests were performed with speech data collected from the TIMIT corpus provided by the Linguistic Data Consortium (LDC).
APA, Harvard, Vancouver, ISO, and other styles
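The abstract above reduces DTW's cost by aligning wavelet-domain representations instead of the raw signals. The sketch below applies a textbook dynamic-programming DTW to the coarse approximation band from pywt.wavedec, which is roughly 2^level times shorter than the input; the wavelet, level and distance are assumptions, not the thesis's exact configuration.

```python
import numpy as np
import pywt

def dtw_distance(a, b):
    """Classical dynamic-programming DTW cost between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def wavelet_dtw(x, y, wavelet="db4", level=3):
    """Compare the coarse approximation bands, so the quadratic DTW cost shrinks."""
    ax = pywt.wavedec(x, wavelet, level=level)[0]
    ay = pywt.wavedec(y, wavelet, level=level)[0]
    return dtw_distance(ax, ay)

t = np.linspace(0, 1, 512)
template = np.sin(2 * np.pi * 4 * t)
utterance = np.sin(2 * np.pi * 4 * (t ** 1.1))   # time-warped version of the template
print(wavelet_dtw(template, utterance))
```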
36

Stromme, Oyvind. "On the applicability of wavelet transforms to image and video compression." Thesis, University of Strathclyde, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Grant, Jeremy. "Wavelet-Based Segmentation of Fluorescence Microscopy Images in Two and Three Dimensions." Fogler Library, University of Maine, 2008. http://www.library.umaine.edu/theses/pdf/GrantJ2008.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Renfrew, Mark E. "A Comparison of Signal Processing and Classification Methods for Brain-Computer Interface." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1246474708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Shuo. "MALDI-TOF MS data processing using wavelets, splines and clustering techniques." [Johnson City, Tenn. : East Tennessee State University], 2004. http://etd-submit.etsu.edu/etd/theses/available/etd-1112104-113123/unrestricted/ChenS121404f.pdf.

Full text
Abstract:
Thesis (M.S.)--East Tennessee State University, 2004.
Title from electronic submission form. ETSU ETD database URN: etd-1112104-113123 Includes bibliographical references. Also available via Internet at the UMI web site.
APA, Harvard, Vancouver, ISO, and other styles
40

Janga, Aparna. "REFLECTED IMAGE PROCESSING FOR SPECULAR WELD POOL SURFACE MEASUREMENT." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/502.

Full text
Abstract:
The surface of the weld pool contains information that can be exploited to emulate a skilled human welder and to better understand and control the welding process. Of the existing techniques, the method that uses the pool's specular nature to advantage, and which is relatively more cost-effective and suitable for the welding environment, is the one that utilizes reflected images to reconstruct the 3D weld pool surface using structured light and image processing techniques. In this thesis, an improvement has been made to the existing method by changing the welding direction to obtain a denser reflected dot-matrix pattern, allowing more accurate surface measurement. The reflected images, obtained by capturing the reflection of a structured laser dot-matrix pattern from the pool surface through a high-speed camera with a narrow band-pass filter, are then processed by a newly proposed algorithm to find the position of each reflected dot relative to its actual projection dot. This is a complicated process owing to the increased density of dots and the noise induced by the harsh environment. The obtained correspondence map may later be used by a surface reconstruction algorithm to derive the three-dimensional pool surface based on the reflection law.
APA, Harvard, Vancouver, ISO, and other styles
41

Kim, Il-Ryeol. "Wavelet domain partition-based signal processing with applications to image denoising and compression." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 2.98 Mb., 119 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3221054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wei, Jie. "Foveate wavelet transform and its applications in digital video processing, acquisition, and indexing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ37768.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lorenz, Dirk. "Wavelet shrinkage in signal & image processing : an investigation of relations and equivalences." kostenfrei, 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975601687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Al-Jawad, Naseer. "Exploiting statistical properties of wavelet coefficients for image/video processing and analysis tasks." Thesis, University of Buckingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601354.

Full text
Abstract:
In this thesis the statistical properties of wavelet transform high-frequency sub-bands have been used and exploited in three main applications: image/video feature-preserving compression, face-biometric content-based video retrieval, and face feature extraction for face verification and recognition. The main idea of this thesis was also used previously in watermarking (Dietze 2005), where the watermark can be hidden automatically near the significant features in the wavelet sub-bands. The idea has also been used in image compression, where special integer compression is applied on constrained devices (Ehlers 2008). In image quality measurement, the Laplace Distribution Histogram (LDH) is also used to measure image quality: the theoretical LDH of any high-frequency wavelet sub-band can match the histogram produced from the same high-frequency wavelet sub-band of a high-quality picture, while the LDH of a noisy or blurred one can be fitted to the theoretical one (Wang and Simoncelli 2005). Some research has used the idea of wavelet high-frequency sub-band feature extraction implicitly; in this thesis we focus explicitly on using the statistical properties of the wavelet sub-bands in the multi-resolution wavelet transform. The fact that the coefficients of each high-frequency wavelet sub-band follow a Laplace distribution (LD) (or so-called generalised Gaussian distribution) has been mentioned in the literature. Here the relation between the statistical properties of the wavelet high-frequency sub-bands and feature extraction is well established. The LDH has two tails, which makes its shape either symmetrical or skewed to the left or right; this symmetry or skewing is normally around the mean, which is theoretically equal to zero. In our study we paid close attention to these tails, which actually represent the image's significant features and which can be mapped from the wavelet domain to the spatial domain. The features can be maintained, accessed, and modified very easily using a certain threshold.
APA, Harvard, Vancouver, ISO, and other styles
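The abstract above models high-frequency sub-band coefficients with a Laplace distribution and treats the histogram's tails as the feature-bearing coefficients. Below is a minimal sketch, assuming a single-level pywt.dwt2 decomposition, a maximum-likelihood Laplace scale estimate and an arbitrary 5% tail fraction; the thesis's actual feature selection may differ.

```python
import numpy as np
import pywt

def tail_feature_mask(image, wavelet="db2", tail=0.05):
    """Fit a zero-mean Laplace model to each detail band and mark the coefficients
    that fall in the distribution's tails (the feature-bearing ones)."""
    _, (ch, cv, cd) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
    masks = {}
    for name, band in (("LH", ch), ("HL", cv), ("HH", cd)):
        b = np.mean(np.abs(band))            # ML estimate of the Laplace scale parameter
        thr = -b * np.log(tail)              # |x| such that P(|X| > x) = tail for Laplace(0, b)
        masks[name] = np.abs(band) > thr
    return masks

img = np.random.default_rng(2).integers(0, 256, size=(64, 64))
masks = tail_feature_mask(img)
print({k: int(v.sum()) for k, v in masks.items()})
```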
45

Al-Jawad, Neseer. "Exploiting Statistical Properties of Wavelet Coefficients for Image/Video Processing and Analysis Tasks." Thesis, University of Exeter, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wei, Hua-Liang. "A wavelet-based approach for nonlinear system identification and non-stationary signal processing." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bsoul, Abed Al-Raoof. "PROCESSING AND CLASSIFICATION OF PHYSIOLOGICAL SIGNALS USING WAVELET TRANSFORM AND MACHINE LEARNING ALGORITHMS." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/258.

Full text
Abstract:
Over the last century, physiological signals have been broadly analyzed and processed not only to assess the function of the human physiology, but also to better diagnose illnesses or injuries and provide treatment options for patients. In particular, the electrocardiogram (ECG), blood pressure (BP) and impedance are among the most important biomedical signals processed and analyzed. The majority of studies that utilize these signals attempt to diagnose important irregularities such as arrhythmia or blood loss by processing one of these signals. However, the relationship between them is not yet fully studied using computational methods. Therefore, a system that extracts and combines features from all physiological signals representative of states such as arrhythmia and loss of blood volume, in order to predict the presence and the severity of such complications, is of paramount importance for care givers. This will not only enhance diagnostic methods, but also enable physicians to make more accurate decisions; thereby the overall quality of care provided to patients will improve significantly. In the first part of the dissertation, analysis and processing of the ECG signal to detect the most important waves, i.e. P, QRS, and T, are described. A wavelet-based method is implemented to facilitate and enhance the detection process. The method not only provides high detection accuracy, but is also efficient with regard to memory and execution time. In addition, the method is robust against noise and baseline drift, as supported by the results. The second part outlines a method that extracts features from the ECG signal in order to classify and predict the severity of arrhythmia. Arrhythmia can be life-threatening or benign. Several methods exist to detect abnormal heartbeats. However, a clear criterion to identify whether the detected arrhythmia is malignant or benign is still an open problem. The method discussed in this dissertation will address a novel solution to this important issue. In the third part, a classification model that predicts the severity of loss of blood volume by incorporating multiple physiological signals is elaborated. The features are extracted in the time and frequency domains after transforming the signals with the Wavelet Transform (WT). The results support the desirable reliability and accuracy of the system.
APA, Harvard, Vancouver, ISO, and other styles
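The abstract above describes a wavelet-based detector for the P, QRS and T waves. As a rough sketch only, the code below reconstructs one mid-scale detail band and picks R-peak candidates with scipy.signal.find_peaks on a synthetic beat train; the band index, thresholds and the toy signal are assumptions, not the dissertation's detector.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs, wavelet="db4", level=4):
    """Reconstruct a single mid-scale detail band and pick its largest local maxima."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # keep only one detail band (index 2 here) and zero the rest before reconstructing
    kept = [c if i == 2 else np.zeros_like(c) for i, c in enumerate(coeffs)]
    band = pywt.waverec(kept, wavelet)[: len(ecg)]
    env = band ** 2                                        # energy envelope
    peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=int(0.3 * fs))
    return peaks

fs = 250
t = np.arange(0, 10, 1 / fs)
ecg_like = np.zeros_like(t)
ecg_like[::fs] = 1.0                                       # one sharp "beat" per second
ecg_like = np.convolve(ecg_like, np.hanning(9), mode="same")
ecg_like += 0.05 * np.random.default_rng(3).standard_normal(t.size)
print(detect_r_peaks(ecg_like, fs))
```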
48

Forsberg, Axel. "A Wavelet-Based Surface Electromyogram Feature Extraction for Hand Gesture Recognition." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39766.

Full text
Abstract:
The research field of robotic prosthetic hands has expanded immensely in the last couple of decades and prostheses are in more commercial use than ever. Classification of hand gestures using sensory data from electromyographic signals in the forearm is fundamental to any advanced prosthetic hand. Improving classification accuracy could lead to more user-friendly and more naturally controlled prostheses. In this thesis, features were extracted from wavelet transform coefficients of four-channel electromyographic data and used for classifying ten different hand gestures. An extensive search for suitable combinations of wavelet transform, feature extraction, feature reduction, and classifier was performed, and an in-depth comparison between classification results of selected groups of combinations was conducted. Classification results of combinations were carefully evaluated with extensive statistical analysis. It was shown in this study that logarithmic features outperform non-logarithmic features in terms of classification accuracy. A subset of all combinations containing only suitable combinations based on the statistical analysis is then presented, and the novelty of these results can direct future work on hand gesture recognition in a promising direction.
APA, Harvard, Vancouver, ISO, and other styles
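The abstract above reports that logarithmic wavelet features outperform non-logarithmic ones for gesture classification. The sketch below computes one plausible such feature set, the log-variance of each wavelet sub-band per channel; the wavelet, depth and feature definition are assumptions, and the classifier stage is omitted.

```python
import numpy as np
import pywt

def log_wavelet_features(window, wavelet="db2", level=3):
    """For each channel, the log of each sub-band's variance (plus a small epsilon)
    is one feature, giving channels * (level + 1) features per analysis window."""
    feats = []
    for channel in np.atleast_2d(window):
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        feats.extend(np.log(np.var(c) + 1e-12) for c in coeffs)
    return np.array(feats)

rng = np.random.default_rng(4)
emg_window = rng.standard_normal((4, 256))      # 4 channels, 256-sample window
print(log_wavelet_features(emg_window).shape)   # (16,) = 4 channels x 4 sub-bands
```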
49

Ahmadian, Alireza. "Flexible medical image transmission and compression schemes using multiresolution orthagonal wavelet transform." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yarham, Carson, Daniel Trad, and Felix J. Herrmann. "Curvelet processing and imaging: adaptive ground roll removal." Canadian Society of Exploration Geophysicists, 2004. http://hdl.handle.net/2429/519.

Full text
Abstract:
In this paper we present examples of ground roll attenuation for synthetic and real data gathers by using Contourlet and Curvelet transforms. These non-separable wavelet transforms are localized in both the (x,t)- and (k,f)-domains and allow for adaptive separation of signal and ground roll. Both linear and non-linear filtering are discussed using the unique properties of these bases, which allow for simultaneous localization in both domains. Even though the linear filtering techniques are encouraging, the true added value of these basis-function techniques becomes apparent when we use these decompositions to adaptively subtract modelled ground roll from data using a non-linear thresholding procedure. We show real and synthetic examples, and the results suggest that these directionally selective basis functions provide a useful tool for the removal of coherent noise such as ground roll.
APA, Harvard, Vancouver, ISO, and other styles
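The abstract above adaptively subtracts modelled ground roll by non-linear thresholding in a directional transform domain. The sketch below swaps the curvelet/contourlet transform for a separable 2D wavelet, purely so the example stays self-contained with PyWavelets, and zeroes data coefficients wherever the modelled ground roll dominates; the transform choice, threshold factor and synthetic gather are all assumptions, and curvelets, as in the paper, would give better directional selectivity.

```python
import numpy as np
import pywt

def adaptive_subtract(data, groundroll_model, wavelet="db4", level=2, lam=1.5):
    """Keep a data coefficient only where it exceeds lam times the corresponding
    coefficient of the modelled ground roll; otherwise zero it out."""
    d = pywt.wavedec2(data, wavelet, level=level)
    g = pywt.wavedec2(groundroll_model, wavelet, level=level)

    def keep(x, m):
        return np.where(np.abs(x) > lam * np.abs(m), x, 0.0)

    out = [d[0]]
    for (dh, dv, dd), (gh, gv, gd) in zip(d[1:], g[1:]):
        out.append((keep(dh, gh), keep(dv, gv), keep(dd, gd)))
    rec = pywt.waverec2(out, wavelet)
    return rec[: data.shape[0], : data.shape[1]]

rng = np.random.default_rng(5)
reflections = 0.1 * rng.standard_normal((64, 64))
groundroll = np.add.outer(np.hanning(64), np.hanning(64))   # slow, coherent "noise"
shot_gather = reflections + groundroll
cleaned = adaptive_subtract(shot_gather, groundroll)
print(cleaned.shape)
```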