Dissertations / Theses on the topic 'De-noising'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 39 dissertations / theses for your research on the topic 'De-noising.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Schwartz, David. "Navigational Neural Coding and De-noising." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625322.
Fike, Amanda (Amanda J.). "De-noising and de-blurring of images using deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123266.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 12).
Deep Neural Networks (DNNs) [1] are often used for image reconstruction, but they reconstruct the low frequencies of an image better than the high frequencies, especially when the input images are noisy. In this paper, we test a Learning Synthesis Deep Neural Network (LS-DNN) [2] in combination with BM3D [3], an off-the-shelf de-noising tool, to reconstruct noisy, blurry images, attempting to decouple the de-noising and de-blurring steps. Overall, the LS-DNN performed similarly to a DNN trained only with respect to the ground-truth images, and decoupling the de-noising and de-blurring steps underperformed compared with de-blurring and de-noising simultaneously with a single DNN.
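The decoupled pipeline the abstract describes — de-noise first, de-blur second — can be sketched with off-the-shelf tools. The following is a minimal illustration, not the paper's LS-DNN/BM3D method: it substitutes scikit-image's wavelet de-noiser for BM3D and Wiener deconvolution for the DNN de-blurring stage, and it assumes the blur kernel `psf` is known.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import denoise_wavelet, wiener

img = img_as_float(data.camera())
psf = np.ones((5, 5)) / 25.0                        # assumed known blur kernel
blurred = convolve2d(img, psf, mode='same', boundary='wrap')
noisy = blurred + 0.05 * np.random.randn(*blurred.shape)

# Step 1: de-noise (stand-in for the off-the-shelf de-noiser)
denoised = denoise_wavelet(noisy, rescale_sigma=True)

# Step 2: de-blur the de-noised image (Wiener deconvolution)
restored = wiener(denoised, psf, balance=0.1)
```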
by Amanda Fike. S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering.
Chen, Guangyi. "Applications of wavelet transforms in pattern recognition and de-noising." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ43552.pdf.
Khorbotly, Sami. "Design and implementation of low cost de-noising systems for real-time control applications." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1180976720.
Sawant, Rupesh Prakash. "Bio-Particle Counting and Sizing Using Micro-Machined Multichannel Coulter Counter with Wavelet Based De-Noising." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1196456801.
Shafri, Helmi Zulhaidi Mohd. "An assessment of the potential of wavelet-based de-noising in the analysis of remotely sensed data." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397592.
Full textVrba, Filip. "Odstranění hluku magnetické rezonance v nahrávkách řeči." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442573.
Venter, Nielen Christoff. "The effects of empirical mode decomposition based on de-noising techniques in improving detection of directly stimulated skeletal muscle response." Thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/3213.
Palaniappan, Prashanth. "De-noising of Real-time Dynamic Magnetic Resonance Images by the Combined Application of Karhunen-Loeve Transform (KLT) and Wavelet Filtering." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357269157.
Carter, Drew Davis. "Characterisation of cardiac signals using level crossing representations." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130760/1/Drew_Carter_Thesis.pdf.
Frigo, Guglielmo. "Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424133.
Full textNell'ambito della Scienza dell'Informazione, il problema dell'acquisizione dei segnali è intimamente connesso alla progettazione e implementazione di efficienti algoritmi e procedure capaci di estrapolare e codificare il contenuto informativo contenuto nel segnale. Per oltre cinquant'anni, il riferimento in quest'ambito è stato rappresentato dal teorema di campionamento di Shannon e la corrispondente definizione di informazione in termini di estensione spettrale del segnale. La società contemporanea si fonda su di un pressoché incessante ed istantaneo scambio di informazioni, che vengono veicolate per la maggior parte in formato digitale. In siffatto contesto, i moderni dispositivi di comunicazione sono chiamati a gestire notevoli moli di dati, seguendo un consueto protocollo operativo che prevede acquisizione, elaborazione e memorizzazione. Nonostante l'incessante sviluppo tecnologico, il protocollo di acquisizione convenzionale è sottoposto a sempre crescente pressione e richiede un carico computazionale non proporzionale al reale contenuto informativo del segnale. Recentemente, un nuovo paradigma di acquisizione, noto con il nome di Campionamento Compresso, va diffondendosi tra i diversi settori della Scienza dell'Informazione. Questa innovativa teoria di campionamento si fonda su due principi fondamentali: sparsità del segnale e incoerenza del campionamento, e li sfrutta per acquisire il segnale direttamente in una versione condensata, compressa appunto. La frequenza di campionamento è collegata al tasso di aggiornamento dell'informazione, piuttosto che all'effettiva estensione spettrale del segnale. Dato un segnale sparso, il suo contenuto informativo può essere ricostruito a partire da quello che potrebbe sembrare un insieme incompleto di misure, al costo di un maggiore carico computazionale della fase di ricostruzione. La mia tesi di dottorato si basa sulla teoria del Campionamento Compresso e illustra come i concetti di sparsità e incoerenza possano essere sfruttati per sviluppare efficienti protocolli di campionamento e per comprendere appieno le sorgenti di incertezza che gravano sulle misure. L'attività di ricerca ha riguardato aspetti sia teorici sia implementativi, traendo spunto da contesti applicativi di misura che spaziano dalle comunicazioni a radio frequenza alla stima dei sincrofasori e all'indagine dell'attività neurologica. 
The thesis is organized in four chapters, whose most significant contributions include:
• the definition of a unified model for sparse signal acquisition systems, with particular attention to the implications of the sparsity and incoherence assumptions;
• the characterization of the main algorithmic families for sparse signal reconstruction, with particular attention to the impact of additive noise on estimation accuracy;
• the implementation and experimental validation of a compressive sampling algorithm capable of providing accurate preliminary information and suitably pre-processed data for a vector analyzer or cognitive radio application;
• the development and characterization of a compressive sampling algorithm for super-resolution spectral analysis in the discrete Fourier transform (DFT) domain;
• the definition of an over-complete dictionary that explicitly accounts for the spectral leakage effect;
• the investigation of so-called off-the-grid estimation approaches, through a suitable combination of compressive-sampling super-resolution and polar interpolation of the DFT coefficients;
• the analysis of the concept of sparsity in the context of quasi-stationary signals, underlining the importance of time-varying sparsity signal models;
• the definition of a model of the signal's spectral content via compressive sampling, to be used in dynamic spectral analysis applications based on the Taylor-Fourier transform.
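To make the sparsity-and-incoherence idea concrete, here is a generic sparse-recovery sketch — not one of the thesis's algorithms: iterative soft-thresholding (ISTA) recovering a k-sparse vector from m < n incoherent random measurements.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # incoherent random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # compressed measurements
x_hat = ista(A, y)                            # sparse reconstruction
```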
Zbranek, Lukáš. "Moderní metody zvýrazňování statických MR obrazů" [Modern methods for enhancing static MR images]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218181.
Cronvall, Per. "Vektorkvantisering för kodning och brusreducering" [Vector quantization for coding and noise reduction]. Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2377.
Full textThis thesis explores the possibilities of avoiding the issues generally associated with compression of noisy imagery, through the usage of vector quantization. By utilizing the learning aspects of vector quantization, image processing operations such as noise reduction could be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of it's simplicity, has clear advantages over MPEG-4 encoding.
Gómez-Navarro, Laura. "Techniques de débruitage d'image pour améliorer l'observabilité de la fine échelle océanique par SWOT" [Image de-noising techniques to improve the observability of the oceanic fine scale with SWOT]. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU024.
Sea Surface Height (SSH) observations describing scales in the range 10-100 km are crucial to better understand energy transfers across scales in the open ocean and to quantify vertical exchanges of heat and biogeochemical tracers. The Surface Water Ocean Topography (SWOT) mission is a new wide-swath altimetric satellite planned for launch in 2022. SWOT will provide information on SSH at kilometric resolution, but uncertainties due to various sources of error will challenge our capacity to extract the physical signal of structures below a few tens of kilometers. Filtering SWOT noise and errors is a key step towards an optimal interpretation of the data. The aim of this study is to explore image de-noising techniques to assess the capability of the future SWOT data to resolve the oceanic fine scales. Pseudo-SWOT data are generated with the SWOT simulator for Ocean Science, which uses as input the SSH outputs of high-resolution Ocean General Circulation Models (OGCMs). Several de-noising techniques are tested to find the one that renders the most accurate SSH and derivative fields while preserving the magnitude and shape of the oceanic features present. The techniques are evaluated based on the root mean square error, spectra and other diagnostics. In Chapter 3, the pseudo-SWOT data for the Science phase are analyzed to assess the capability of SWOT to resolve the meso- and submesoscale in the western Mediterranean. A Laplacian diffusion de-noising technique is implemented, allowing recovery of SSH, geostrophic velocity and relative vorticity down to 40-60 km. This first step allows the mesoscale to be adequately observed, but room is left for improvement at the submesoscale, especially in better preserving the intensity of the SSH signal. In Chapter 4, another de-noising technique is explored and implemented in the same region for the satellite's fast-sampling phase. This technique is motivated by recent advances in data assimilation techniques for removing spatially correlated errors based on SSH and its derivatives. It aims at retrieving accurate SSH derivatives by recovering their structure and preserving their magnitude. A variational method is implemented which can penalize the SSH derivatives of first, second or third order, or a combination of them. We find that the best parameterization is based on a second-order penalization, and we find the optimal parameters of this setup. Thanks to this technique, the wavelengths resolved by SWOT in this region are reduced by a factor of 2 while preserving the magnitude of the SSH field and its derivatives. In Chapter 5, we investigate the finest spatial scale that SWOT could resolve after de-noising in several regions and seasons and using different OGCMs, in order to document the variety of regimes that SWOT will sample. The de-noising algorithm performs well even in the presence of intense unbalanced motions, and it systematically reduces the smallest resolvable wavelength. Advanced de-noising algorithms also allow reliable reconstruction of SSH gradients (related to geostrophic velocities) and second-order derivatives (related to geostrophic vorticity).
Our results also show that significant uncertainty remains about SWOT's finest resolved scale in a given region and season because of the large spread in the level of variance predicted among our high-resolution ocean model simulations. The de-noising technique developed, implemented and tested in this doctoral thesis makes it possible to recover, in some cases, SWOT spatial scales as small as 15 km. This method is a very useful contribution to achieving the objectives of the SWOT mission. The results will help better understand ocean dynamics and oceanic features and their role in the climate system.
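The second-order variational penalization described in Chapter 4 can be illustrated with a plain gradient-descent sketch, assuming the simplest quadratic form E(u) = ½‖u − f‖² + ½λ‖Δu‖² with periodic boundaries; the thesis's actual functional, weights and solver may differ.

```python
import numpy as np

def laplacian(u):
    """Five-point discrete Laplacian with periodic boundaries."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def denoise_second_order(f, lam=1.0, tau=0.01, n_iter=400):
    """Gradient descent on E(u) = 0.5||u - f||^2 + 0.5*lam*||Lap(u)||^2;
    the gradient of the penalty term is the biharmonic operator Lap(Lap(u))."""
    u = f.copy()
    for _ in range(n_iter):
        u -= tau * ((u - f) + lam * laplacian(laplacian(u)))
    return u
```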
Al, Rababa'A Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis : three case studies in finance." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.
Romanenko, Ilya. "Novel image processing algorithms and methods for improving their robustness and operational performance." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16340.
Full textKhalil, Toni. "Processus d’évaluation de la qualité de l’imagerie médicale et outils d’aide à la décision basée sur la connaissance." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0351.
The great progress that medical imaging has brought to the medical sector at the diagnostic level (conventional radiology, computed tomography, nuclear magnetic resonance and interventional radiology) has made it a first-choice modality. With an ever-increasing number of diagnostic images produced each year, and with the recommendations of international organizations requiring low-dose irradiation that results in considerable noise which can distort the diagnosis, Artificial Intelligence (AI) de-noising methods offer an opportunity to meet growing demand. In this thesis, we quantify the effect of AI-based de-noising on X-ray texture parameters using a convolutional neural network. The study is based on characterizing the radiographic noise from an X-ray of a water phantom and injecting this noise into standard-dose radiographs to produce artificially noisy images, in order to feed the neural network thousands of images for its learning phase. After the learning phase, the testing phase and inference, human chest X-rays were extracted from the archive to validate the de-noising on human X-rays in RGB and in greyscale. The study was done with a water phantom for ethical reasons: to avoid irradiating people, to avoid voluntary and involuntary patient movements, and to ensure a study based on a homogeneous material (water), which constitutes the majority of the human body. It was carried out, on the one hand, on 17 X-rays of a water phantom with different exposure doses to study the noise distribution over different grey-scale values and, on the other hand, on 25 X-rays divided into 5 groups of 5 images each, taken with the same exposure dose without and with adjacent obstacles, to study the gain effect of the flat-panel detector chosen as the pre-processing means. The noise distribution was examined at two grey levels, 160 and 180, and showed a higher noise level at level 160, where the absorption of the X-ray beam is greater and, consequently, the quantum effect is strongest. Noise scatter diagrams at these two levels are shown. The presence of obstacles in the same image showed an absorption directly proportional to the number of obstacles next to the water phantom, which triggered a gain factor of the detector that in turn produces nonlinear noise. Texture characteristics of AI-de-noised images were compared with those of the artificially noisy radiographs using the peak signal-to-noise ratio (PSNR); features with increased PSNR values on RGB and greyscale images were considered concordant. A test comparing absolute values between AI-de-noised and artificially noisy images was performed. The concordant features showed a (38.05/30.06 − 1) × 100 = 26.58% improvement in RGB versus a (35.93/22.21 − 1) × 100 = 61.77% improvement in greyscale. In conclusion, applying AI-based de-noising to X-ray images retains most of the texture information of the image. AI-based de-noising in low-dose radiography is a very promising approach because it adapts the de-noising while preserving information where it matters.
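For reference, the PSNR figure used throughout this comparison is computed as follows (the standard definition, with the peak value as a parameter):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means `test` is closer
    to `reference`."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 20.0 * np.log10(peak / np.sqrt(mse))
```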
Lin, Lian-Da, and 林良達. "Study of De-noising Techniques Applied to Image Restoration." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/89439193033598738809.
National Taiwan Ocean University, Department of Electrical Engineering, ROC academic year 86.
A new algorithm incorporating standard median filtering is proposed to effectively remove impulsive noise in image processing. This computationally efficient approach first classifies input pixels and then performs the median filtering process. Simulation results show that the proposed scheme, at both high and low SNR, yields a lower mean square error (MSE) than the standard median filter. Threshold estimation is a critical step in the WaveShrink method, which aims to produce a faithful replica of the uncorrupted input signal. Empirical results show, however, that WaveShrink thresholds (either Minimax or Universal) are often too large or too small for achieving optimal results. As an alternative, we present an intuitive approach for estimating better thresholds that significantly improve the de-noising performance.
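A minimal sketch of the classify-then-filter idea, assuming the simplest classifier (distance from the local median) and a hypothetical threshold; the thesis's actual classification rule is not spelled out in the abstract:

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median(img, threshold=40):
    """Classify pixels first, then median-filter only the suspects:
    pixels far from their local median are treated as impulse noise,
    while uncorrupted pixels pass through unchanged."""
    med = median_filter(img, size=3)
    impulses = np.abs(img.astype(int) - med.astype(int)) > threshold
    return np.where(impulses, med, img)
```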
Yu, Chen Kuan, and 陳冠宇. "An Improved Wavelet Thresholding Method for De-Noising Electrocardiogram Signals." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/28576494141875361473.
Fu Jen Catholic University, Department of Computer Science and Information Engineering, ROC academic year 97.
The electrocardiogram (ECG) records the electrical activity of the heart and is widely used in the diagnosis of heart disease. However, ECG signals are easily contaminated by various noises. A de-noising method is therefore used to filter the noise, and the resulting ECG helps physicians diagnose cardiovascular disease. In recent years, several de-noising methods based on the discrete wavelet transform (DWT) have been proposed to extract the weak ECG signal in a strongly noisy environment. Although the related methods have their strengths, there is room for further study and improvement. In this paper, we propose an improved wavelet thresholding method for de-noising ECG signals and investigate additive Gaussian noise on various ECG signals from the MIT-BIH database. In the experiments, the proposed approach outperforms the existing thresholding methods in both signal-to-noise ratio (SNR) and root mean square error (RMSE). Moreover, our approach retains the features of the ECG signals and gives better visual performance.
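For orientation, a baseline wavelet-thresholding de-noiser of the kind the thesis improves upon can be sketched with PyWavelets; the `db4` wavelet, the decomposition level and the universal threshold below are illustrative defaults, not the proposed method:

```python
import numpy as np
import pywt

def wavelet_denoise_ecg(x, wavelet='db4', level=4):
    """Soft-threshold the detail coefficients with the universal threshold;
    the noise level is estimated from the finest-scale details (MAD)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def snr_db(clean, est):
    return 10.0 * np.log10(np.sum(clean**2) / np.sum((clean - est)**2))

def rmse(clean, est):
    return np.sqrt(np.mean((clean - est)**2))
```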
Tsao, Chien-Kung, and 曹鍵滎. "Improvements of Wavelet-Shrinkage for De-noising of Nonstationary Signal." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/00099785220522681752.
National Taiwan Ocean University, Department of Electrical Engineering, ROC academic year 88.
De-noising of random speech signals is one of the important and challenging topics in modern signal processing, and wavelet shrinkage is an important scheme for it. In the wavelet shrinkage method, the wavelet coefficients of the noisy signal are obtained by the wavelet transform. These coefficients are used to estimate a suitable threshold for shrinking the original wavelet coefficients. After shrinkage, the reconstructed signal is generated from the shrunken wavelet coefficients using the inverse wavelet transform. Though the wavelet shrinkage method is straightforward, it does not perform well in some cases. Empirical Wiener filtering is thus included to enhance the de-noising ability; nevertheless, its performance is not good in the high-SNR case. The cycle-spinning method combined with wavelet shrinkage gives better results than empirical Wiener filtering, but cycle spinning needs a lot of processing time. The undecimated wavelet shrinkage method is introduced to improve the efficiency of cycle spinning. In this thesis, improved algorithms for de-noising nonstationary signals are investigated and discussed.
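The cycle-spinning remedy mentioned above — averaging shrinkage results over circular shifts to suppress the shift dependence of the decimated DWT — can be sketched as follows (illustrative parameters, not the thesis's implementation):

```python
import numpy as np
import pywt

def shrink(x, wavelet='db4', level=4):
    """Plain wavelet shrinkage with a MAD-estimated universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def cycle_spin_denoise(x, n_shifts=16):
    """Average shrinkage results over circular shifts; this removes the
    shift dependence of the decimated DWT at the cost of extra passes."""
    acc = np.zeros(len(x))
    for s in range(n_shifts):
        acc += np.roll(shrink(np.roll(x, s)), -s)
    return acc / n_shifts
```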
Tseng, Yu-Jen, and 曾裕仁. "De-noising of Left Ventricular Myocardial Boundaries in Magnetic Resonance Images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/37462083657997862929.
Da-Yeh University, Institute of Industrial Engineering, ROC academic year 88.
Magnetic Resonance Imaging (MRI) is one of the most powerful radiological tools for diagnosis. MRI is noninvasive and provides images clear enough to measure the endocardial and epicardial borders of the left ventricle. Detection of these borders provides effective data for diagnosing heart diseases such as cardiomegaly and myocardial infarction. Because dynamic organs generate a huge number of MRI images, identification by manual tracing takes a long time, so an effective computer-aided diagnostic system is essential to maintain quality and reduce operating costs. By combining a wavelet-based image enhancement algorithm with a dynamic-programming border detection algorithm, the endocardial and epicardial borders of the left ventricle can be measured automatically. However, the detected borders are not smooth; since the actual myocardial wall is smooth, the ideal borders should be smooth closed curves. The purpose of this research is to apply digital filters to de-noise the automatically detected borders, which increases the accuracy of the measurements. In this thesis, a wavelet-based de-noising technique and a least-mean-square (LMS) adaptive filter are applied to de-noise the endocardial and epicardial borders. Experimental results show that the wavelet-based technique performs better than the LMS adaptive filter.
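The LMS alternative compared in the thesis can be sketched as an adaptive linear predictor running along a 1-D border signature (for example, radius versus angle around the ventricle centroid); the filter order and step size below are made-up values:

```python
import numpy as np

def lms_smooth(d, order=8, mu=0.01):
    """LMS adaptive linear predictor applied to a 1-D border signature;
    the prediction acts as the de-noised curve. `mu` must be small enough
    for the input power, or the weights diverge."""
    w = np.zeros(order)
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        x = d[n - order:n][::-1]          # most recent samples first
        y[n] = w @ x                      # predicted (smoothed) value
        e = d[n] - y[n]                   # prediction error
        w += mu * e * x                   # LMS weight update
    return y
```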
Hong, Wun-De, and 洪文德. "Wavelet Theory-based De-noising FPGA For Power Line Communication Using OFDM." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/cw6wwn.
Chung Yuan Christian University, Institute of Electrical Engineering, ROC academic year 99.
Thanks to technological progress, the power line at home can not only deliver power but also transmit digital signals using modulation. This thesis uses OFDM for high-speed data transmission and has three parts: first, the transmitting side, including FEC, IFFT, I/Q modulation and so on; second, the receiving side, consisting of demodulation, FFT, decoding and so on; third, a coupling circuit. Finally, a wavelet filter and extensive verification are used to reduce the bit error rate in the experiments. The thesis verifies the design in practice and analyzes it in theory, considering AWGN and different line lengths and loads. Parallel power lines of different lengths and loads do not change the BER, but adding AWGN does. The wavelet filter plays an important role in reducing the noise on the power line in this thesis.
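The OFDM send/receive chain at the core of the system can be illustrated with a bare numpy round-trip over an AWGN channel; this sketch omits FEC, the cyclic prefix, the coupling circuit and the wavelet filter:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_sym = 64, 200                            # subcarriers, OFDM symbols
bits = rng.integers(0, 2, (n_sym, n_sub, 2))
qpsk = ((2*bits[..., 0] - 1) + 1j*(2*bits[..., 1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_sub)   # IFFT: frequency -> time
noise = rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape)
rx = tx + 0.1 * noise                             # AWGN channel
rec = np.fft.fft(rx, axis=1) / np.sqrt(n_sub)     # FFT demodulation

bits_hat = np.stack([rec.real > 0, rec.imag > 0], axis=-1).astype(int)
ber = np.mean(bits_hat != bits)                   # bit error rate
```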
Tu, Cin-Hong, and 涂欽鴻. "Enhanced Contour Detection Using Phase Preserving De-noising Correction in Ultrasound Images." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/86866706310347855875.
National Dong Hwa University, Department of Computer Science and Information Engineering, ROC academic year 94.
With the growth of national income, people nowadays pay much more attention to health issues, and medical care has improved correspondingly. Today we have advanced medical devices such as ultrasonic instruments, computed tomography, positron emission tomography and magnetic resonance imaging, among which ultrasonic instruments are in widespread use because they are low-cost, free of side effects, mobile and non-invasive. Ultrasonic imaging can serve as a first-step medical inspection and has become one of the most popular medical instruments; this thesis therefore focuses on ultrasonic images. Doctors can diagnose patients using organ contours detected by a snake algorithm. However, the image is easily corrupted by noise when the ultrasound signal is captured. The thesis proposes a pre-processing system for contour detection that combines a Log-Gabor filter, contrast enhancement, histogram equalization and the Canny edge algorithm to improve the quality of ultrasonic images. First, the parameters of the Log-Gabor filter, such as the minimum wavelength and the central frequency of the bandwidth, are set to reduce noise. Then, contrast enhancement and histogram equalization are used to enhance image contrast, which makes object boundaries clearer. The Canny edge algorithm is applied to compute the edge map. Finally, the system obtains the contour of the region of interest using a GVF-based snake method. The thesis demonstrates that the pre-processing system increases the accuracy of GVF-based snake contour detection, and the produced contour images provide important references for medical inspection.
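Two stages of the described pipeline — contrast enhancement by histogram equalization and Canny edge detection — can be sketched directly with scikit-image; the Log-Gabor stage and the GVF snake are omitted, and the test image is only a stand-in for an ultrasound frame:

```python
from skimage import data, img_as_float
from skimage.exposure import equalize_hist
from skimage.feature import canny

img = img_as_float(data.camera())    # stand-in for a de-noised ultrasound frame
enhanced = equalize_hist(img)        # contrast enhancement
edges = canny(enhanced, sigma=2.0)   # edge map that would feed the snake
```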
Jiang, Meng-Ting, and 江孟霆. "Using the relative relationship between subject and background for image de-noising." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93222375923725718004.
Fu Jen Catholic University, Master's Program in Computer Science and Information Engineering, ROC academic year 102.
Several image noise reduction methods already exist, such as the linear filter, median filter, Wiener filter, Gaussian filter, anisotropic filter, total variation de-noising and neighborhood filter. In image noise reduction, the PSNR value is one of the most important figures of merit. Unfortunately, all the methods above share a problem: stronger noise reduction causes more loss of detail. This thesis uses the non-local de-noising algorithm together with interactive image segmentation by MSRM to address the problem. The non-local algorithm reduces image noise, and the interactive MSRM segmentation separates the foreground and background of the image; the two kinds of regions then receive different levels of noise reduction. The system finds the foreground and background once the user simply outlines the foreground area. A lower level of noise reduction is applied to the foreground to maintain detail, and a higher level is applied to the background. The result is a less noisy image that maintains good detail in the foreground; both PSNR values and visual inspection show good results.
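A sketch of the region-dependent de-noising idea, assuming scikit-image's non-local means in place of the thesis's exact de-noiser and a hand-drawn rectangle in place of the MSRM segmentation mask:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

img = img_as_float(data.camera())
noisy = img + 0.08 * np.random.randn(*img.shape)
sigma = np.mean(estimate_sigma(noisy))

weak = denoise_nl_means(noisy, h=0.6 * sigma, fast_mode=True)    # keep detail
strong = denoise_nl_means(noisy, h=1.5 * sigma, fast_mode=True)  # smooth hard

fg = np.zeros(img.shape, bool)
fg[100:400, 150:350] = True            # placeholder for the user-guided mask
out = np.where(fg, weak, strong)       # gentle on foreground, strong elsewhere
```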
SNEKHA. "Genetic Algorithm Based ECG Signal De-noising Using EEMD and Fuzzy Thresholding." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15346.
Lu, I-Chia, and 呂宜家. "Exploiting wavelet de-noising in the temporal sequences of features for robust speech recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/03773923951424999965.
National Chi Nan University, Department of Electrical Engineering, ROC academic year 99.
In this thesis, we propose applying wavelet de-noising (WD) techniques to temporal-domain feature sequences to enhance noise robustness and improve the accuracy of noisy speech recognition. In the proposed method, the temporal-domain feature sequence is first processed by a specific statistics normalization scheme, such as mean and variance normalization (MVN) or cepstral gain normalization (CGN), and then processed with the wavelet de-noising algorithm. We find that the wavelet de-noising procedure effectively reduces the mid- and high-modulation-frequency distortion remaining in the statistics-normalized speech features. On the Aurora-2 digit database and task, experimental results show that this process significantly improves speech recognition accuracy in noisy environments. The pairing of WD with CMVN/CGN provides about 20% relative error reduction over the MFCC baseline, outperforms CMVN/CGN alone, and pushes the overall recognition rate beyond 90%.
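A minimal sketch of the MVN-then-WD pairing on a (frames × coefficients) feature matrix, using PyWavelets with illustrative settings rather than the thesis's exact configuration:

```python
import numpy as np
import pywt

def mvn(features):
    """Mean and variance normalization along the time axis (utterance-wise)."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

def wd_per_dimension(features, wavelet='db4', level=3):
    """Wavelet de-noising applied to each coefficient's temporal sequence
    after normalization (a sketch of the MVN+WD pairing)."""
    out = np.empty_like(features, dtype=float)
    for d in range(features.shape[1]):
        x = features[:, d]
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
        out[:, d] = pywt.waverec(coeffs, wavelet)[:len(x)]
    return out

# usage: feats is a (frames x cepstral-dims) MFCC matrix
# denoised = wd_per_dimension(mvn(feats))
```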
Tsai, Yi-Cheng, and 蔡一誠. "Application of Wavelet De-noising Techniques to Mean Scatterer Spacing Estimation for Liver Tissue Characterization." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/95250299057267282139.
National Taiwan University, Graduate Institute of Electrical Engineering, ROC academic year 94.
Liver cirrhosis is a very frequently seen disease in Taiwan. Traditionally, doctors use ultrasound to detect liver-related diseases. However, the rate of false diagnosis has increased owing to differences in human anatomy and the subjectivity of doctors. In our research, we therefore use the wavelet transform to calculate the mean scatterer spacing of the ultrasound signals obtained from the ultrasound machine. Because of the great complexity of the various tissues, the signals suffer from considerable noise and attenuation. We therefore use a noise-detection method and find a set of thresholds to reduce the noise. We use both simulated and real signals to observe the de-noising efficiency and obtain signals with less noise. This study should help doctors diagnose liver-related diseases and decrease man-made false diagnoses.
Teng, You-Yang, and 滕有揚. "The Research of Digital Signal Processing Chip Set Applied on Acoustic Signal De-noising." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/q7fn2b.
Chung Yuan Christian University, Institute of Information and Computer Engineering, ROC academic year 92.
In the transmission of acoustic signals, the environment and the various noises that permeate the propagation channel make an appropriate signal processing procedure necessary to identify signals whose energy has already been decreased by long-distance transmission and environmental interference. This research is based on a wavelet method that chooses a threshold value for de-noising. The procedure has three stages: (1) wavelet transform of the acoustic signals, (2) thresholding of the wavelet coefficients, and (3) inverse wavelet transform to reconstruct the modified signals. The most important part is the second stage, where different threshold-selection rules are compared by their acoustic signal recognition performance. The developed system is based on the TI TMS320C6711 DSK; since it offers high digital signal processing performance, it can reduce the training and recognition time for acoustic signal recognition.
Chiao, Yu-Hua, and 焦郁華. "Mixed PDE Based Methods with Adaptive Block Truncation Coding for Image De-noising and Compression." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/78626738415621210245.
National Chung Hsing University, Department of Applied Mathematics, ROC academic year 99.
In this thesis, we propose an adaptive block truncation coding (ABTC) method for image compression. To achieve better image quality, we propose a novel algorithm that mixes an upwind finite difference scheme for solving a time-dependent convection-diffusion equation with the ABTC algorithm to remove image noise. The numerical results show that our proposed methods effectively remove the noise and preserve edge information well during the image compression process.
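For background, the classic (non-adaptive) block truncation coding step that ABTC builds on can be sketched as follows; this is the textbook two-level, moment-preserving quantizer, not the thesis's adaptive variant or its PDE stage:

```python
import numpy as np

def btc_block(block):
    """Classic BTC for one block: keep the mean and standard deviation plus a
    1-bit map of which pixels lie above the mean; the two reconstruction
    levels a and b preserve the block's first two moments."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = bitmap.sum(), block.size
    if q in (0, n):                              # flat block: nothing to split
        return np.full(block.shape, m)
    a = m - s * np.sqrt(q / (n - q))             # level for below-mean pixels
    b = m + s * np.sqrt((n - q) / q)             # level for above-mean pixels
    return np.where(bitmap, b, a)

def btc(img, patch=4):
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            out[i:i+patch, j:j+patch] = btc_block(
                img[i:i+patch, j:j+patch].astype(float))
    return out
```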
Weng, Mu-Shen, and 翁睦盛. "A Study and Comparison on De-noising of Power Quality Transient Signal with Wavelet Transform." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/95221913866722380682.
Chung Yuan Christian University, Institute of Electrical Engineering, ROC academic year 90.
With the rapid development of high-tech industries and the growing use of precise production equipment and test instruments, far higher power quality (PQ) is demanded nowadays. The primary work of improving power quality is to widely collect power signals through PQ monitoring instruments; based on analysis of the collected PQ data, the causes of PQ events can be inferred as references for PQ improvement. In the monitoring process, PQ-related signals are recorded via A/D converters, digital fault recorders, and waveform data transmission and quantification. Noise always exists in this process and contaminates the collected PQ signals, and the noise-contaminated signals often cause false alarms in the PQ monitor, especially for transient events. To enhance the accuracy of PQ transient event detection, a high-efficiency de-noising scheme is needed to eliminate the influence of the noise riding on the signals. In processing PQ transient signals, the traditional Fourier Transform (FT), though extensively used for observing high-frequency transient signals, cannot precisely determine the times at which disturbance events occur, and is therefore insufficient for detecting the occurrence times of transient events in a database of PQ transient signals. In contrast, with its multi-resolution capability and varying time-frequency windows in both the time and frequency domains, the Wavelet Transform (WT) can indicate the occurrence times of events precisely when applied to high-frequency analysis with higher time-domain resolution; the WT is therefore widely employed for detecting transient signals in power systems. However, because of the noise mentioned above, the accuracy of the WT in detecting transient signals is usually reduced greatly. WT-based de-noising approaches are widely used to reduce the influence of noise riding on the signals: while eliminating the noise, a threshold is applied to prune the noise coefficients. Nevertheless, setting the threshold relies heavily on experience and field circumstances, so the de-noising work is both time- and effort-consuming. To solve the problem of threshold determination, three de-noising algorithms, including adaptive de-noising, hypothesis-testing de-noising, and spatial-correlation de-noising, are proposed in this thesis to determine the thresholds automatically in accordance with the background noise. Through these de-noising methods for PQ transient signal monitoring, the ability of the WT to detect and localize disturbances can be restored. To evaluate and compare the feasibility of the three WT-based de-noising approaches for PQ transient signals, simulated data obtained from MATLAB and the Electro-Magnetic Transients Program (EMTP), as well as field data, are used for testing. The results show that the three approaches overcome the influence of the noise successfully, as expected.
The occurrence times of transient events can therefore be detected and localized accurately by the WT-based approaches. The comparisons also reveal that if a hardware implementation is needed for on-line de-noising applications, the third algorithm, based on spatial correlation, is recommended for its simpler computing steps.
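The core detection step — thresholding fine-scale wavelet details with a noise-adaptive threshold to localize transients in time — can be sketched like this; the MAD-based universal threshold stands in for the thesis's three automatic schemes:

```python
import numpy as np
import pywt

def locate_transients(v, wavelet='db4'):
    """Flag likely transient instants from the finest-scale wavelet details,
    with the threshold set automatically from the background noise (MAD).
    The returned indices are approximate: each level-1 detail coefficient
    covers roughly two input samples (plus filter delay)."""
    cA, cD = pywt.dwt(v, wavelet)
    sigma = np.median(np.abs(cD)) / 0.6745        # noise scale estimate
    t = sigma * np.sqrt(2.0 * np.log(len(v)))     # universal threshold
    hits = np.nonzero(np.abs(cD) > t)[0]
    return 2 * hits                               # map back to sample indices
```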
Wang, G., Simon J. Shepherd, Clive B. Beggs, N. Rao, and Y. Zhang. "The use of kurtosis de-noising for EEG analysis of patients suffering from Alzheimer's disease." 2015. http://hdl.handle.net/10454/9242.
Full textThe use of electroencephalograms (EEGs) to diagnose and analyses Alzheimer's disease (AD) has received much attention in recent years. The sample entropy (SE) has been widely applied to the diagnosis of AD. In our study, nine EEGs from 21 scalp electrodes in 3 AD patients and 9 EEGs from 3 age-matched controls are recorded. The calculations show that the kurtoses of the AD patients' EEG are positive and much higher than that of the controls. This finding encourages us to introduce a kurtosis-based de-noising method. The 21-electrode EEG is first decomposed using independent component analysis (ICA), and second sort them using their kurtoses in ascending order. Finally, the subspace of EEG signal using back projection of only the last five components is reconstructed. SE will be calculated after the above de-noising preprocess. The classifications show that this method can significantly improve the accuracy of SE-based diagnosis. The kurtosis analysis of EEG may contribute to increasing the understanding of brain dysfunction in AD in a statistical way.
Huang, Min-yu, and 黃敏煜. "A Discrete Wavelet Transform (DWT) based De-noising Circuit Design with its Applications to Medical Signal Processing." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/57371780328343232025.
Chang Gung University, Institute of Electronic Engineering, ROC academic year 93.
The wavelet transform is a multiresolution analysis that decomposes a signal into octave-band components, which can then be used to analyze the original signal. It provides a novel and effective tool for many applications in signal processing, and it has an advantage over the traditional Fourier transform for time-frequency analysis because of its multiresolution character; it has therefore been widely applied in signal- and image-processing research. In this thesis, we propose and realize a Discrete Wavelet Transform (DWT) based de-noising circuit architecture with application to noise reduction for medical signals. Our design uses a three-level octave decomposition with Daubechies 4 filters, and the circuit consists of three parts: DWT, thresholding, and IDWT. Software and hardware simulations were performed first; we then implemented the de-noising circuit by downloading the Verilog code to an FPGA to observe its practical processing ability. Feeding a noisy electrocardiogram (ECG) into the de-noising circuit showed that the circuit satisfies the requirement of real-time processing and achieves very good noise reduction.
Ting, Tzu-hsuan, and 丁子軒. "Combining Deep De-noising Auto-encoder and Recurrent Neural Network in End-to-end Speech Recognition for Noise Robustness." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nrcpz2.
National Sun Yat-sen University, Department of Computer Science and Engineering, ROC academic year 106.
In this paper, we implement an end-to-end noise-robust speech recognition system on the Aurora 2.0 dataset by combining deep de-noising auto-encoders and recurrent neural networks. At the front end we use a fully connected de-noising auto-encoder (FCDAE) to handle noisy data, and we propose two effective methods to improve its de-noising performance during training. The first assigns different weights to the loss values of data at different signal-to-noise ratios; the second changes how the training data are used. Combining the two methods gives the best experimental results. For the back-end speech recognition, we use an end-to-end system based on a bidirectional recurrent neural network trained with the connectionist temporal classification criterion, compared against a baseline back end based on hidden Markov models and Gaussian mixture models (HMM-GMM). Integrating the FCDAE with the recognition models, we obtain a 94.20% word accuracy rate in the clean condition and 94.24% in the multi-condition setting, relative improvements of 65% and 20% over the baseline experiments; the 94.20% is obtained with the FCDAE and HMM-GMM, and the 94.24% by combining the FCDAE and the bidirectional recurrent neural network.
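The first proposed method — SNR-dependent weighting of the FCDAE training loss — can be sketched in PyTorch; the layer sizes and the weighting scheme here are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

# A tiny fully connected de-noising auto-encoder with a per-sample loss
# weight, e.g. derived from each utterance's SNR (dimensions are made up).
dae = nn.Sequential(nn.Linear(39, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 39))
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
mse = nn.MSELoss(reduction='none')

def train_step(noisy, clean, snr_weight):
    """noisy/clean: (batch, 39) features; snr_weight: (batch,) loss weights,
    chosen larger for the SNR conditions one wants the model to favor."""
    opt.zero_grad()
    loss = mse(dae(noisy), clean).mean(dim=1)   # per-sample reconstruction MSE
    loss = (snr_weight * loss).mean()           # SNR-dependent weighting
    loss.backward()
    opt.step()
    return loss.item()
```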
Parravicini, Giovanni. "A factor augmented vector autoregressive model and a stacked de-noising auto-encoders forecast combination to predict the price of oil." Master's thesis, 2019. http://hdl.handle.net/10362/73196.
Liu, Chia-Chou, and 劉佳洲. "On the Application of the De-noising Method of Stationary Wavelet Coefficients Threshold to Filter Out Noise in Digital Hearing Aids." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43887541206717616847.
National Taiwan University, Department of Engineering Science and Ocean Engineering, ROC academic year 98.
Improving the hearing of the hearing-impaired is something researchers and medical professionals have long been striving to achieve, and with over 200 million deaf or hard-of-hearing people worldwide, they understand its importance. Fortunately, technology, from early analog hearing aids to today's mainstream digital hearing aids, has brought flourishing digital signal processing techniques. The function of current hearing aids is no longer restricted to simple amplification, which lets the hearing-impaired hear directly; different sound-signal processing can satisfy the needs of different users. Even so, there is still room for improvement. In this thesis, white noise is added to a clean voice signal to produce a noisy signal. First, the discrete wavelet transform is used to split the voice bandwidth into nine sub-bands; second, the stationary wavelet transform is used to split it into nine sub-bands; third, the wavelet packet transform is used to split it into eight identical bandwidths. Wavelet de-noising is used to filter out the high-frequency noise. After de-noising, the voice signal is compensated for four different types of hearing loss: 40 dB uniform hearing loss, mild low-frequency hearing loss, moderate high-frequency hearing loss, and severe high-frequency hearing loss. Finally, output saturation limits the final speech energy to a fixed level. The simulations verify that the process can effectively filter out white noise and compensate the four types of hearing loss, achieving the basic functions of a digital hearing aid.
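A compact sketch of the stationary-wavelet variant with PyWavelets: de-noise the detail bands, apply one gain per sub-band as a stand-in for the hearing-loss compensation curves, then saturate the output. The gains, wavelet and level count are illustrative only, not the thesis's settings.

```python
import numpy as np
import pywt

def swt_hearing_aid(x, gains, wavelet='db4'):
    """Stationary wavelet transform sketch: soft-threshold the detail bands,
    scale each band by a (hypothetical) compensation gain, then saturate.
    len(x) must be divisible by 2**len(gains); gains run coarsest to finest."""
    coeffs = pywt.swt(x, wavelet, level=len(gains))
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # finest-scale details
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    processed = [(cA, g * pywt.threshold(cD, t, mode='soft'))
                 for (cA, cD), g in zip(coeffs, gains)]
    y = pywt.iswt(processed, wavelet)
    return np.clip(y, -1.0, 1.0)                        # output saturation
```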
Chien, Hsin-Kai, and 錢信凱. "A Study of Images Recognition and De-noising with Varying Emissivity and Temperature Levels by Using the Middle Wave Infrared Camera." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/71882083749283351026.
National Kaohsiung University of Applied Sciences, Department of Mold and Die Engineering, ROC academic year 98.
In this study, a middle-wave infrared (MWIR) camera is used to acquire infrared images of target objects against backgrounds of varying temperature and emissivity, and image processing methods are then applied for recognition and de-noising. When the temperatures of the target object and the background are close, the quality of the infrared images is visibly degraded, making recognition hard and increasing the noise in the surrounding image; images with a larger emissivity difference are clearer than those with a smaller one. To extend the use of MWIR images in practical measurement, it is important to acquire reliable images and to apply image processing methods. The experiment acquires infrared images of three target objects against eight backgrounds of different emissivity (stainless steel with gray paint εb=0.7, cast iron εb=0.92, white paper εb=0.93, metal with gray paint εb=0.94, wood εb=0.95, blue cloth εb=0.96, black paper εb=0.98, stainless steel with black paint εb=0.99) and five different temperatures (31°C, 33°C, 35°C, 37°C, 39°C). An experimental box is used to reduce environmental error, with an average environmental temperature of 25°C. A digital thermocouple and the MWIR camera are used to find the emissivity of the background and of the target object respectively, and a plate-type heater holds the different temperatures. After selecting the infrared images of the three target objects, image processing methods are used for blurred-image enhancement, edge recognition and noise removal, and the study explains what these methods mean for infrared images. At present, after converting blurred images to gray-level images, the system can acquire the target object contours and remove the noise that hinders recognition.
Pandey, Santosh Kumar. "Signal Processing Tools To Enhance Interpretation Of Impulse Tests On Power Transformers." Thesis, 1997. https://etd.iisc.ac.in/handle/2005/1821.
Full textPandey, Santosh Kumar. "Signal Processing Tools To Enhance Interpretation Of Impulse Tests On Power Transformers." Thesis, 1997. http://etd.iisc.ernet.in/handle/2005/1821.
Salgado Patarroyo, Ivan Camilo. "Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals." Thesis, 2013. http://hdl.handle.net/10012/7847.