To see the other types of publications on this topic, follow the link: De-noising.

Dissertations / Theses on the topic 'De-noising'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 39 dissertations / theses for your research on the topic 'De-noising.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Schwartz, David. "Navigational Neural Coding and De-noising." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625322.

Full text
Abstract:
The work discussed in this thesis is the product of an investigation into the information- and coding-theoretic properties of colluding populations of navigationally relevant mammalian neurons. For brevity and completeness, that work is presented chronologically in the order in which it was investigated. The thesis details coding-theoretic properties of (and develops a model for communication between) colluding populations of spatially responsive neurons in the hippocampus (HC) and medial entorhinal cortex (MEC) through a hypothetical layer of interneurons (each of which possesses exclusively excitatory or inhibitory synapses). It presents analyses of the changes in network structure induced by an anti-Hebbian learning process and translates these analyses into biologically testable hypotheses. Further, it is demonstrated that for appropriately parameterized codes (i.e. populations of grid and place cells in MEC and HC, respectively), this network is able to learn the code and correct for errors introduced by neural noise, potentially explaining the results of a correlational study: place cell variability sharply decreases at a time that coincides with the maturation of the grid cell network in developing mice. The work further predicts that disruption of the grid cell network (e.g. via optogenetic inactivation or lesioning) should increase the variability of place cell firing and impair decoding from these place cells' activities. Continuing down this avenue, we consider how the inclusion of a population of the somewhat controversial time cells (purportedly residing in HC and MEC) impacts de-noising network structure, the coding properties of the population of populations of all three classes of navigational neuron, and denoisability. These results are translated into testable neurobiological predictions. Additionally, to ensure realistic stimulus statistics, locations and times are taken from paths recorded from navigating rats in the Computational and Experimental Neuroscience Laboratory at the University of Arizona. Interestingly, while time cells exhibit some of the coding and information-theoretic trends described in chapter 4, in certain cases they admit surprising connectivity trends. Most surprisingly, after including time cells in this framework, it was discovered that some classes of neural noise appear to improve decoding accuracy over the entire path while simultaneously impairing the accuracy of decoding position and time independently.
APA, Harvard, Vancouver, ISO, and other styles
2

Fike, Amanda (Amanda J.). "De-noising and de-blurring of images using deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123266.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (page 12).
Deep Neural Networks (DNNs) [1] are often used for image reconstruction, but they reconstruct the low frequencies of an image better than the high frequencies. This is especially the case when using noisy images. In this paper, we test a Learning Synthesis Deep Neural Network (LS-DNN) [2] in combination with BM3D [3], an off-the-shelf de-noising tool, attempting to decouple the de-noising and de-blurring steps when reconstructing noisy, blurry images. Overall, the LS-DNN performed similarly to a DNN trained only with respect to the ground-truth images, and decoupling the de-noising and de-blurring steps underperformed compared with de-blurring and de-noising simultaneously with a single DNN.
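The decoupling idea described above can be illustrated with a simple two-step restoration. The sketch below is not the LS-DNN/BM3D pipeline of the thesis; it uses a Gaussian filter as a stand-in denoiser and a hand-written frequency-domain Wiener deconvolution, with an assumed noise-to-signal ratio `nsr` and a toy image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_deconvolve(blurred, psf, nsr=0.01):
    # frequency-domain Wiener deconvolution; nsr is an assumed noise-to-signal ratio
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0        # toy test image
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0           # 5x5 box blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(clean) * np.fft.fft2(psf)))
noisy = blurred + 0.05 * rng.standard_normal(blurred.shape)  # blur + additive noise

denoised = gaussian_filter(noisy, sigma=1.0)                 # step 1: de-noise (BM3D stand-in)
restored = wiener_deconvolve(denoised, psf)                  # step 2: de-blur
```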
by Amanda Fike. S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering.
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Guangyi. "Applications of wavelet transforms in pattern recognition and de-noising." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0006/MQ43552.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khorbotly, Sami. "DESIGN AND IMPLEMENTATION OF LOW COST DE-NOISING SYSTEMS FOR REAL-TIME CONTROL APPLICATIONS." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1180976720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sawant, Rupesh Prakash. "Bio-Particle Counting and Sizing Using Micro-Machined Multichannel Coulter Counter with Wavelet Based De-Noising." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1196456801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shafri, Helmi Zulhaidi Mohd. "An assessment of the potential of wavelet-based de-noising in the analysis of remotely sensed data." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vrba, Filip. "Odstranění hluku magnetické rezonance v nahrávkách řeči." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442573.

Full text
Abstract:
This thesis deals with the removal of noise in speech recordings that were recorded in an MRI environment. For this purpose, Nvidia RTX Voice technology, the VST plug-in Noisereduce, and a self-designed subtractive de-noising method are used. As part of the work, a program with a simple graphical interface is implemented in Python to load the recordings and then de-noise them using the proposed methods. The work also includes measurements in a magnetic resonance environment with two microphones. The quality of the processed recordings is evaluated within the program using the Short-Time Objective Intelligibility (STOI) measure as well as subjective analysis in listening tests.
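As a rough illustration of the subtractive approach mentioned above (not the thesis implementation), the following sketch subtracts an average noise magnitude spectrum, estimated from a noise-only excerpt, from the short-time spectrum of the noisy speech; the sampling rate, frame length and spectral floor are assumed values.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy_speech, noise_only, fs=16000, nperseg=512):
    # estimate an average noise magnitude per frequency bin from a noise-only excerpt
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)
    noise_mag = np.mean(np.abs(N), axis=1, keepdims=True)
    # subtract it from the noisy magnitude, keeping a small spectral floor to limit musical noise
    _, _, Y = stft(noisy_speech, fs=fs, nperseg=nperseg)
    mag = np.maximum(np.abs(Y) - noise_mag, 0.05 * np.abs(Y))
    _, cleaned = istft(mag * np.exp(1j * np.angle(Y)), fs=fs, nperseg=nperseg)
    return cleaned
```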
APA, Harvard, Vancouver, ISO, and other styles
8

Venter, Nielen Christoff. "The effects of empirical mode decomposition based on de-noising techniques in improving detection of directly stimulated skeletal muscle response." Thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/3213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Palaniappan, Prashanth. "De-noising of Real-time Dynamic Magnetic Resonance Images by the Combined Application of Karhunen-Loeve Transform (KLT) and Wavelet Filtering." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357269157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Carter, Drew Davis. "Characterisation of cardiac signals using level crossing representations." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/130760/1/Drew_Carter_Thesis.pdf.

Full text
Abstract:
This study examines a type of event-based sampling known as Level Crossing - its behaviour when applied to noisy signals, and an application to cardiac arrhythmia detection. Using a probabilistic approach, it presents a mathematical description of events sampled from noisy signals, and uses the model to estimate characteristics of the underlying clean signal. It evaluates the use of segments of polynomials, calculated from the Level Crossing samples of real cardiac signals, as features for machine learning algorithms to identify various types of arrhythmia.
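A minimal sketch of level-crossing sampling as described above: the signal is only "sampled" where it crosses a fixed set of amplitude levels. The levels and the toy signal below are assumptions for illustration, not data from the thesis.

```python
import numpy as np

def level_crossings(signal, levels):
    # return (sample index, level) events where the signal crosses each amplitude level
    events = []
    for lvl in levels:
        above = signal >= lvl
        idx = np.flatnonzero(np.diff(above.astype(int)) != 0)
        events.extend((int(i) + 1, lvl) for i in idx)
    return sorted(events)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)  # stand-in for a cardiac trace
print(level_crossings(noisy, levels=[-0.5, 0.0, 0.5])[:10])
```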
APA, Harvard, Vancouver, ISO, and other styles
11

Frigo, Guglielmo. "Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424133.

Full text
Abstract:
At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding the signal information content. For over five decades, the indisputable theoretical benchmark has been represented by the well-known Shannon sampling theorem, and the corresponding notion of information has been indissolubly related to signal spectral bandwidth. Contemporary society is founded on an almost instantaneous exchange of information, which is mainly conveyed in digital format. Accordingly, modern communication devices are expected to cope with huge amounts of data, in a typical sequence of steps which comprise acquisition, processing and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure and requires a computational effort not related to the actual signal information content. In recent years, a novel sensing paradigm, known as Compressive Sensing (CS), has been spreading quickly among several branches of Information Theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in a condensed form. The sampling rate is related to the signal information rate, rather than to the signal spectral bandwidth. Given a sparse signal, its information content can be recovered even from what could appear to be an incomplete set of measurements, at the expense of a greater computational effort at the reconstruction stage. My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to intimately understand the sources of uncertainty that affect measurements. The research activity has dealt with both theoretical and practical issues, inferred from measurement application contexts, ranging from radio frequency communications to synchrophasor estimation and neurological activity investigation. The thesis is organised in four chapters whose key contributions include: • definition of a general mathematical model for sparse signal acquisition systems, with particular focus on sparsity and incoherence implications; • characterization of the main algorithmic families for recovering sparse signals from a reduced set of measurements, with particular focus on the impact of additive noise; • implementation and experimental validation of a CS-based algorithm for providing accurate preliminary information and suitably preprocessed data for a vector signal analyser or a cognitive radio application; • design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain; • definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect; • insight into the so-called off-the-grid estimation approach, by properly combining CS-based super-resolution and polar interpolation of DFT coefficients; • exploration and analysis of sparsity implications in quasi-stationary operative conditions, emphasizing the importance of time-varying sparse signal models; • definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
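The sparse-recovery step at the heart of CS can be sketched with a basic iterative soft-thresholding (ISTA) solver for the l1-regularised least-squares problem; the dimensions, sparsity level and regularisation weight below are arbitrary assumptions, not values from the thesis.

```python
import numpy as np

def ista(y, A, lam=0.05, n_iter=300):
    # iterative soft-thresholding for min ||y - Ax||_2^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2              # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent random sensing matrix
x_hat = ista(A @ x_true, A)                    # recover the sparse signal from m << n samples
```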
APA, Harvard, Vancouver, ISO, and other styles
12

Zbranek, Lukáš. "Moderní metody zvýrazňování statických MR obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218181.

Full text
Abstract:
The aim of this master's thesis is to design and implement an appropriate method for enhancing MR images and identifying rough edges so that the monitored regions can be segmented. Wavelet analysis can be used for this purpose. The simulations are carried out in the MATLAB environment, where a comparison of different types of de-noising and of different mother wavelets is presented. These methods are applied to various MR images of the temporomandibular joint.
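A comparison of mother wavelets of the kind described above can be sketched in Python with PyWavelets rather than MATLAB; the universal soft threshold and the listed wavelets are assumptions for illustration, not the thesis configuration.

```python
import numpy as np
import pywt

def wavelet_denoise_2d(img, wavelet, level=2):
    # 2-D wavelet de-noising with a universal soft threshold estimated from the finest diagonal band
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode='soft') for c in detail) for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(0)
slice_ = 100.0 + 10.0 * rng.standard_normal((128, 128))   # stand-in for a noisy MR slice
results = {w: wavelet_denoise_2d(slice_, w) for w in ['haar', 'db4', 'sym8', 'coif3']}
```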
APA, Harvard, Vancouver, ISO, and other styles
13

Cronvall, Per. "Vektorkvantisering för kodning och brusreducering." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2377.

Full text
Abstract:

This thesis explores the possibilities of avoiding the issues generally associated with compression of noisy imagery through the use of vector quantization. By utilizing the learning aspects of vector quantization, image processing operations such as noise reduction can be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of its simplicity, has clear advantages over MPEG-4 encoding.
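A toy version of the patch-based vector quantization idea, using k-means from SciPy to learn the codebook; the patch size and codebook size are assumed values, and the coding/denoising trade-off here is only illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def vq_code_decode(img, patch=4, codebook_size=64):
    # split into non-overlapping patches, learn a codebook, and map every patch to its nearest codeword
    h, w = (img.shape[0] // patch) * patch, (img.shape[1] // patch) * patch
    blocks = (img[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .swapaxes(1, 2)
              .reshape(-1, patch * patch)
              .astype(float))
    codebook, labels = kmeans2(blocks, codebook_size, minit='++')
    decoded = (codebook[labels]
               .reshape(h // patch, w // patch, patch, patch)
               .swapaxes(1, 2)
               .reshape(h, w))
    return decoded   # quantization averages similar patches, which also suppresses some noise
```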

APA, Harvard, Vancouver, ISO, and other styles
14

Gómez-Navarro, Laura. "Techniques de débruitage d'image pour améliorer l'observabilité de la fine échelle océanique par SWOT." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU024.

Full text
Abstract:
Sea Surface Height (SSH) observations describing scales in the range 10 - 100 km are crucial to better understand energy transfers across scales in the open ocean and to quantify vertical exchanges of heat and biogeochemical tracers. The Surface Water Ocean Topography (SWOT) mission is a new wide-swath altimetric satellite planned for launch in 2022. SWOT will provide information on SSH at a kilometric resolution, but uncertainties due to various sources of errors will challenge our capacity to extract the physical signal of structures below a few tens of kilometers. Filtering SWOT noise and errors is a key step towards an optimal interpretation of the data. The aim of this study is to explore image de-noising techniques to assess the capabilities of the future SWOT data to resolve the oceanic fine scales. Pseudo-SWOT data are generated with the SWOT simulator for Ocean Science, which uses as input the SSH outputs from high-resolution Ocean General Circulation Models (OGCMs). Several de-noising techniques are tested to find the one that renders the most accurate SSH and derivative fields while preserving the magnitude and shape of the oceanic features present. The techniques are evaluated based on the root mean square error, spectra and other diagnostics. In Chapter 3, the pseudo-SWOT data for the Science phase are analyzed to assess the capabilities of SWOT to resolve the meso- and submesoscale in the western Mediterranean. A Laplacian diffusion de-noising technique is implemented, allowing SSH, geostrophic velocity and relative vorticity to be recovered down to 40 - 60 km. This first step allowed the mesoscale to be adequately observed, but room is left for improvement at the submesoscale, especially in better preserving the intensity of the SSH signal. In Chapter 4, another de-noising technique is explored and implemented in the same region for the satellite's fast-sampling phase. This technique is motivated by recent advances in data assimilation techniques that remove spatially correlated errors based on SSH and its derivatives. It aims at retrieving accurate SSH derivatives, recovering their structure and preserving their magnitude. A variational method is implemented which can penalize the SSH derivatives of first, second or third order, or a combination of them. We find that the best parameterization is based on a second-order penalization, and we determine the optimal parameters of this setup. Thanks to this technique the wavelengths resolved by SWOT in this region are reduced by a factor of 2, whilst preserving the magnitude of the SSH fields and their derivatives. In Chapter 5, we investigate the finest spatial scale that SWOT could resolve after de-noising in several regions, seasons and using different OGCMs. Our study focuses on different regions and seasons in order to document the variety of regimes that SWOT will sample. The de-noising algorithm performs well even in the presence of intense unbalanced motions, and it systematically reduces the smallest resolvable wavelength. Advanced de-noising algorithms also allow reliable reconstruction of SSH gradients (related to geostrophic velocities) and second-order derivatives (related to geostrophic vorticity).
Our results also show that a significant uncertainty remains about SWOT's finest resolved scale in a given region and season because of the large spread in the level of variance predicted among our high-resolution ocean model simulations. The de-noising technique developed, implemented and tested in this doctoral thesis allows SWOT spatial scales as low as 15 km to be recovered in some cases. This method is a very useful contribution to achieving the objectives of the SWOT mission. The results found will help better understand the ocean's dynamics and oceanic features and their role in the climate system.
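The Laplacian diffusion de-noising used in Chapter 3 can be sketched as a simple explicit iteration on a 2-D SSH-like field; the periodic boundaries, time step and iteration count are assumptions for illustration only.

```python
import numpy as np

def laplacian_diffusion(field, n_iter=50, dt=0.2):
    # explicit Laplacian diffusion; dt <= 0.25 keeps the scheme stable, boundaries are periodic
    f = field.astype(float).copy()
    for _ in range(n_iter):
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
        f += dt * lap
    return f

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 120)
ssh = np.sin(x)[:, None] * np.cos(x)[None, :]              # smooth, eddy-like pattern
pseudo_swot = ssh + 0.2 * rng.standard_normal(ssh.shape)   # stand-in for instrument noise
denoised = laplacian_diffusion(pseudo_swot)
```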
APA, Harvard, Vancouver, ISO, and other styles
15

Al, Rababa'A Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis : three case studies in finance." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.

Full text
Abstract:
This thesis aims to provide new insights into the importance of decomposing aggregate time series data using the Maximum Overlap Discrete Wavelet Transform. In particular, the analysis throughout this thesis involves decomposing aggregate financial time series data at hand into approximation (low-frequency) and detail (high-frequency) components. Following this, information and hidden relations can be extracted for different investment horizons, as matched with the detail components. The first study examines the ability of different GARCH models to forecast stock return volatility in eight international stock markets. The results demonstrate that de-noising the returns improves the accuracy of volatility forecasts regardless of the statistical test employed. After de-noising, the asymmetric GARCH approach tends to be preferred, although that result is not universal. Furthermore, wavelet de-noising is found to be more important at the key 99% Value-at-Risk level compared to the 95% level. The second study examines the impact of fourteen macroeconomic news announcements on the stock and bond return dynamic correlation in the U.S. from the day of the announcement up to sixteen days afterwards. Results conducted over the full sample offer very little evidence that macroeconomic news announcements affect the stock-bond return dynamic correlation. However, after controlling for the financial crisis of 2007-2008 several announcements become significant both on the announcement day and afterwards. Furthermore, the study observes that news released early in the day, i.e. before 12 pm, and in the first half of the month, exhibit a slower effect on the dynamic correlation than those released later in the month or later in the day. While several announcements exhibit significance in the 2008 crisis period, only CPI and Housing Starts show significant and consistent effects on the correlation outside the 2001, 2008 and 2011 crises periods. The final study investigates whether recent returns and the time-scaled return can predict the subsequent trading in ten stock markets. The study finds little evidence that recent returns do predict the subsequent trading, though this predictability is observed more over the long-run horizon. The study also finds a statistical relation between trading and return over the long-time investment horizons of [8-16] and [16-32] day periods. Yet, this relation is mostly a negative one, only being positive for developing countries. It also tends to be economically stronger during bull-periods.
APA, Harvard, Vancouver, ISO, and other styles
16

Romanenko, Ilya. "Novel image processing algorithms and methods for improving their robustness and operational performance." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16340.

Full text
Abstract:
Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complicated algorithms are being developed to achieve better signal-to-noise characteristics, more accurate colours, and wider dynamic range, in order to approach the performance levels of the human visual system.
APA, Harvard, Vancouver, ISO, and other styles
17

Khalil, Toni. "Processus d’évaluation de la qualité de l’imagerie médicale et outils d’aide à la décision basée sur la connaissance." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0351.

Full text
Abstract:
The great progress that medical imaging has brought to the medical sector on the diagnostic level (conventional radiology, computed tomography, nuclear magnetic resonance and interventional radiology) has made it a first-choice modality. With an ever-increasing number of diagnostic images produced each year, and with recommendations from international organizations requiring low-dose irradiation that results in considerable noise which can distort the diagnosis, Artificial Intelligence (AI) de-noising methods offer an opportunity to meet this growing demand. In this thesis, we quantify the effect of AI-based de-noising on X-ray textural parameters with respect to a convolutional neural network. The study was based on characterizing the radiographic noise from an X-ray of a water phantom and injecting this noise into standard-dose radiographs to produce artificially noisy images, so that a neural network could be fed thousands of images for its learning phase. After the learning phase, the testing phase and inference, human chest X-rays were extracted from the archive to validate the de-noising on human X-rays in RGB and in greyscale. The study was done with a water phantom for ethical reasons, to avoid irradiating people, to avoid voluntary and involuntary patient movements, and to base the study on a homogeneous material (water), which constitutes the majority of the human body. The study is carried out, on the one hand, on 17 X-rays of a water phantom with different exposure doses to study the noise distribution over different grey-scale values and, on the other hand, on 25 X-rays divided into 5 groups of 5 images, each group taken with the same exposure dose without and with adjacent obstacles, to study the gain effect of the flat-panel detector chosen as the pre-processing means. The noise distribution was examined at two grey levels, 160 and 180, and showed a higher level of noise at level 160, where the absorption of the X-ray beam is greater and, consequently, the quantum effect is strongest. Noise scatter diagrams at these two levels are presented. The presence of obstacles in the same image showed an absorption directly proportional to the number of obstacles next to the water phantom, which triggered a detector gain factor that in turn produces nonlinear noise. Texture characteristics of AI-de-noised images and of the artificially noisy radiographs were compared using the peak signal-to-noise ratio (PSNR). Features with increased PSNR values on the RGB images and on the greyscale images were considered concordant. A test comparing absolute values between AI-de-noised and artificially noisy images was performed. The results for the concordant features were (38.05/30.06 - 1) x 100 = 26.58% improvement in RGB versus (35.93/22.21 - 1) x 100 = 61.77% improvement in greyscale. In conclusion, applying AI-based de-noising to X-ray images retains most of the texture information of the image. AI-based de-noising in low-dose radiography is a very promising approach because it adapts the de-noising, preserving the information where it is needed.
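The PSNR comparison described above reduces to a short computation; the sketch below also reproduces the ratio-based improvement figures quoted in the abstract.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # peak signal-to-noise ratio in dB between a reference image and a test image
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# relative improvement of the concordant-feature ratios quoted in the abstract
rgb_improvement = (38.05 / 30.06 - 1.0) * 100.0    # about 26.6 %
grey_improvement = (35.93 / 22.21 - 1.0) * 100.0   # about 61.8 %
```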
APA, Harvard, Vancouver, ISO, and other styles
18

Lin, Lian-Da, and 林良達. "Study of De-noising Techniques-Applied to Image Restoration." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/89439193033598738809.

Full text
Abstract:
Master's degree
National Taiwan Ocean University
Department of Electrical Engineering
86 (ROC academic year)
A new algorithm incorporating standard median filtering is proposed to effectively remove impulsive noise in image processing. This computationally efficient approach first classifies input pixels and then performs the median filtering process. Simulation results show that, regardless of high or low SNR, the proposed scheme yields a lower mean square error (MSE) than the standard median filter. Threshold estimation is a critical step in the Waveshrink method, which aims to produce a faithful replica of the uncorrupted input signal. Empirical results show, however, that Waveshrink thresholds (either Minimax or Universal) are often too large or too small for achieving optimal results. Alternatively, we present an intuitive approach useful for estimating better thresholds that significantly improve the de-noising performance.
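The classify-then-filter idea can be sketched as a switching median filter: pixels flagged as impulsive are replaced by the local median, and the rest are left untouched. The 3x3 window and the detection threshold are assumed values, not the classification rule of the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, threshold=40):
    # flag a pixel as impulsive when it deviates strongly from its local median
    med = median_filter(img, size=3)
    impulsive = np.abs(img.astype(int) - med.astype(int)) > threshold
    out = img.copy()
    out[impulsive] = med[impulsive]   # only corrupted pixels are filtered, preserving detail
    return out
```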
APA, Harvard, Vancouver, ISO, and other styles
19

Yu, Chen Kuan, and 陳冠宇. "An Improved Wavelet Thresholding Method for De-Noising Electrocardiogram Signals." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/28576494141875361473.

Full text
Abstract:
Master's degree
Fu Jen Catholic University
Department of Computer Science and Information Engineering
97 (ROC academic year)
The electrocardiogram (ECG) records the electrical activity of the heart, and the signal is widely used for the diagnosis of heart diseases. However, ECG signals are easily corrupted by different types of noise. A de-noising method is often used to filter the noise, and the resulting ECG then helps physicians diagnose cardiovascular disease. In recent years, several de-noising methods based on the discrete wavelet transform (DWT) have been proposed to deal with the problem of extracting a weak ECG signal in a strongly noisy environment. Although the related methods have their strengths, there is room for further study and improvement. In this thesis, we therefore propose an improved wavelet thresholding method for de-noising ECG signals and investigate additive Gaussian noise applied to various ECG signals from the MIT-BIH database. The experimental results show that the proposed approach outperforms existing thresholding methods in both signal-to-noise ratio (SNR) and root mean square error (RMSE). Moreover, our approach retains the features of the ECG signals and has better visual performance.
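A baseline wavelet-thresholding de-noiser and the two evaluation metrics can be sketched as follows with PyWavelets; the db6 wavelet, decomposition depth and universal threshold are generic choices, not the improved thresholding rule proposed in the thesis.

```python
import numpy as np
import pywt

def wavelet_denoise_ecg(x, wavelet='db6', level=4):
    # soft-threshold the detail coefficients with the universal threshold
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def snr_db(clean, estimate):
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

def rmse(clean, estimate):
    return np.sqrt(np.mean((clean - estimate) ** 2))
```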
APA, Harvard, Vancouver, ISO, and other styles
20

Tsao, Chien-Kung, and 曹鍵滎. "Improvements of Wavelet-Shrinkage for De-noising of Nonstationary Signal." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/00099785220522681752.

Full text
Abstract:
Master's degree
National Taiwan Ocean University
Department of Electrical Engineering
88 (ROC academic year)
De-noising of random speech signals is one of the important and challenging topics in modern signal processing, and wavelet shrinkage is an important scheme for it. In the wavelet shrinkage method, the wavelet coefficients of the noisy signal are obtained by the wavelet transform, and these coefficients are used to estimate a suitable threshold for shrinking the original wavelet coefficients. After shrinkage, the reconstructed signal is generated from the shrunken wavelet coefficients using the inverse wavelet transform. Though the wavelet shrinkage method is straightforward, it does not perform well in some cases. Empirical Wiener filtering is thus included to enhance the de-noising ability; nevertheless, its performance is not good in the high-SNR case. The cycle-spinning method and wavelet shrinkage are combined for de-noising to obtain better results than the empirical Wiener filtering method, but cycle spinning needs a lot of processing time. The undecimated wavelet shrinkage method is therefore introduced to improve the efficiency of cycle spinning. In this thesis, several improved algorithms for de-noising nonstationary signals are investigated and discussed.
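Cycle spinning, one of the schemes discussed above, averages the shrinkage result over circular shifts of the signal to reduce shift-dependence; the wavelet, depth and number of shifts below are assumptions for illustration.

```python
import numpy as np
import pywt

def cycle_spin_denoise(x, wavelet='db4', level=4, shifts=8):
    # average universal soft-threshold shrinkage over several circular shifts of the signal
    n = len(x)
    acc = np.zeros(n)
    for s in range(shifts):
        xs = np.roll(x, s)
        coeffs = pywt.wavedec(xs, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(n))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        acc += np.roll(pywt.waverec(coeffs, wavelet)[:n], -s)
    return acc / shifts
```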
APA, Harvard, Vancouver, ISO, and other styles
21

Yu-Jen, Tseng, and 曾裕仁. "De-noising of Left Ventricular Myocardial Boundaries in Magnetic Resonance Images." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/37462083657997862929.

Full text
Abstract:
Master's degree
Da-Yeh University
Graduate Institute of Industrial Engineering
88 (ROC academic year)
Magnetic Resonance Imaging (MRI) is one of the most powerful radiological tools for diagnosis. An MRI system is noninvasive and provides clear images for measuring the endocardial and epicardial borders of the left ventricle. Detection of these borders can provide effective data for diagnosing heart diseases such as cardiomegaly and myocardial infarction. Because imaging dynamic organs produces a huge number of MRI images, identification by manual tracing takes a long time, so an effective computer-aided diagnostic system is essential to maintain quality and reduce operating costs. By combining a wavelet-based image enhancement algorithm and a dynamic-programming-based border detection algorithm, the endocardial and epicardial borders of the left ventricle can be measured automatically. However, the detected borders are not smooth; because the actual myocardial wall is smooth, the ideal borders should be smoothly closed curves. The purpose of this research is to apply digital filters to de-noise the automatically detected borders, which increases the accuracy of the measurements. In this thesis, a wavelet-based de-noising technique and a least-mean-square adaptive filter are used to de-noise the endocardial and epicardial borders. Experimental results show that the wavelet-based technique provides better performance than the least-mean-square adaptive filter.
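The least-mean-square adaptive filter used as a comparison above can be sketched as an adaptive line enhancer that predicts each border sample from its recent past; the filter order and step size are assumed values, and the input is expected to be roughly unit-scale for stability.

```python
import numpy as np

def lms_smooth(samples, order=8, mu=0.005):
    # LMS adaptive line enhancer: the prediction from past samples acts as the smoothed output
    s = np.asarray(samples, dtype=float)
    w = np.zeros(order)
    out = s.copy()
    for n in range(order, len(s)):
        x = s[n - order:n][::-1]   # most recent past samples first
        y = w @ x                  # predicted (smoothed) value
        e = s[n] - y               # prediction error drives the adaptation
        w += 2.0 * mu * e * x
        out[n] = y
    return out
```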
APA, Harvard, Vancouver, ISO, and other styles
22

Hong, Wun-De, and 洪文德. "Wavelet Theory-based De-noising FPGA For Power Line Communication Using OFDM." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/cw6wwn.

Full text
Abstract:
Master's degree
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
99 (ROC academic year)
Thanks to technological progress, home power lines can not only deliver power but also transmit digital signals using modulation. This thesis uses OFDM for high-speed data transmission and has three parts: first, the transmitting side, which includes FEC, IFFT, I/Q modulation and so on; second, the receiving side, which consists of demodulation, FFT, decoding and so on; and third, a coupling circuit. Finally, a wavelet filter and extensive verification were used to reduce the bit error rate in the experiments. The thesis conducts both practical verification and theoretical analysis, considering AWGN as well as different line lengths and loads. Parallel power lines of different lengths and loads do not change the BER, but the BER does change when AWGN is added. The wavelet filter plays an important role in reducing the noise on the power line in this thesis.
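The OFDM transmit/receive chain summarized above (IFFT at the transmitter, FFT at the receiver) and a BER measurement over an AWGN channel can be sketched in a few lines; BPSK mapping, 64 subcarriers and the noise level are assumptions, and the FEC, I/Q modulation, coupling circuit and wavelet filter are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_sym = 64, 200                                  # subcarriers, OFDM symbols
bits = rng.integers(0, 2, (n_sym, n_sub))
symbols = 2.0 * bits - 1.0 + 0j                         # BPSK on each subcarrier
tx = np.fft.ifft(symbols, axis=1)                       # IFFT = OFDM modulation
noise = 0.05 * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
rx = tx + noise                                         # AWGN channel (line length/load ignored)
demod = np.fft.fft(rx, axis=1)                          # FFT = OFDM demodulation
bits_hat = (demod.real > 0).astype(int)
ber = np.mean(bits_hat != bits)                         # bit error rate
```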
APA, Harvard, Vancouver, ISO, and other styles
23

tu, cin-hong, and 涂欽鴻. "Enhanced Contour Detection Using Phase Preserving De-noising Correction in Ultrasound Images." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/86866706310347855875.

Full text
Abstract:
Master's degree
National Dong Hwa University
Department of Computer Science and Information Engineering
94 (ROC academic year)
With the growth of national income, people nowadays pay much more attention to health issues, and medical care has improved correspondingly. Today we have advanced medical devices such as ultrasonic instruments, computed tomography, positron emission tomography and magnetic resonance imaging, among which ultrasonic instruments are in widespread use because of their low cost, lack of side effects, portability and non-invasiveness. Ultrasound can serve as a first-step medical examination and has become one of the most popular medical instruments; this thesis therefore focuses on ultrasonic images. Doctors can diagnose patients using organ contours detected by a snake algorithm. However, the image is easily corrupted by noise when the ultrasound signal is captured. This thesis proposes a pre-processing system for contour detection that combines a Log-Gabor filter, contrast enhancement, histogram equalization and the Canny edge algorithm to improve the quality of ultrasonic images. First, the parameters of the Log-Gabor filter, such as the minimum wavelength and the central frequency of the bandwidth, are defined to reduce noise. Then, contrast enhancement and histogram equalization are used to increase image contrast, which makes object boundaries clearer. The Canny edge algorithm is applied to compute the edge map. Finally, the system obtains the contour of the ROI using a GVF-based snake method. The thesis demonstrates that the pre-processing system increases the accuracy of the results of the GVF-based snake when used for contour detection. The produced contour images are important references for medical examination.
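The radial part of a Log-Gabor filter of the kind mentioned above can be built directly in the frequency domain; the minimum wavelength and bandwidth parameter below are assumed values, not the thesis settings.

```python
import numpy as np

def log_gabor_radial(shape, wavelength_min=3.0, sigma_on_f=0.55):
    # radial log-Gabor transfer function; zero response at DC, peak at f0 = 1 / wavelength_min
    rows, cols = shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0                                    # avoid log(0) at the DC term
    f0 = 1.0 / wavelength_min
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2.0 * np.log(sigma_on_f) ** 2))
    lg[0, 0] = 0.0
    return lg

# applying the filter amounts to a multiplication in the frequency domain
img = np.random.default_rng(0).standard_normal((128, 128))   # stand-in for an ultrasound image
filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * log_gabor_radial(img.shape)))
```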
APA, Harvard, Vancouver, ISO, and other styles
24

Jiang, Meng-Ting, and 江孟霆. "Using the relative relationship between subject and background for image de-noising." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/93222375923725718004.

Full text
Abstract:
Master's degree
Fu Jen Catholic University
Master's Program, Department of Computer Science and Information Engineering
102 (ROC academic year)
Several image noise reduction methods already exist, such as the linear filter, median filter, Wiener filter, Gaussian filter, anisotropic filter, total variation denoising and neighborhood filter. In image noise reduction, the PSNR value is one of the most important figures of merit. Unfortunately, all the methods above share a similar problem: stronger noise reduction causes greater loss of detail. This thesis uses the non-local de-noising algorithm and interactive image segmentation by MSRM to address the problem. The non-local algorithm is used to reduce image noise, and interactive image segmentation by MSRM is used to separate the foreground and background of the image; the two kinds of regions then receive different levels of noise reduction. The system finds the foreground and background once the user simply outlines the area of the foreground. A lower level of noise reduction is applied to the foreground to retain detail, and a higher level is applied to the background. The results show a less noisy image that maintains good detail in the foreground, with good outcomes in both PSNR values and visual comparisons.
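The region-dependent filtering idea can be sketched with scikit-image's non-local means, applying a weaker setting inside an assumed foreground mask and a stronger one elsewhere; the `h` multipliers are arbitrary, and the MSRM segmentation step is replaced here by a given mask.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def region_aware_denoise(img, fg_mask):
    # weaker non-local means in the foreground (keeps detail), stronger in the background
    sigma = float(estimate_sigma(img))
    weak = denoise_nl_means(img, h=0.6 * sigma, sigma=sigma, fast_mode=True)
    strong = denoise_nl_means(img, h=1.5 * sigma, sigma=sigma, fast_mode=True)
    return np.where(fg_mask, weak, strong)
```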
APA, Harvard, Vancouver, ISO, and other styles
25

SNEKHA. "GENETIC ALGORITHM BASED ECG SIGNAL DE-NOISING USING EEMD AND FUZZY THRESHOLDING." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15346.

Full text
Abstract:
The electrocardiogram (ECG) records the electrical conduction activity of the heart. These signals are very weak, with a narrow bandwidth of 0.05-120 Hz. Physicians, especially cardiologists, use them to diagnose the heart's condition and heart diseases. During acquisition the ECG signal is contaminated with various artifacts such as power line interference (PLI), patient-electrode motion artifacts, electrode-pop or contact noise, baseline wandering and electromyographic (EMG) noise. Analysis of ECG signals for inspecting cardiac activity becomes difficult in the presence of such unwanted signals, so de-noising the ECG is extremely important to prevent misinterpretation of the patient's cardiac activity. Various methods are available for de-noising ECG signals, such as hybrid techniques, Empirical Mode Decomposition, the undecimated wavelet transform, the Hilbert-Huang transform, adaptive filtering, FIR filtering, morphological filtering, noise invalidation techniques, the non-local means technique and the S-transform. All of these techniques have limitations, such as the mode-mixing problem, oscillation in the reconstructed signals, reduced amplitude of the ECG signal and the problem of degeneracy. To overcome these limitations, a new technique is proposed for de-noising the ECG signal based on a Genetic Algorithm and EEMD with the help of fuzzy thresholding. EEMD is used to decompose the electrocardiogram signal into true Intrinsic Mode Functions (IMFs). The IMFs that are dominated by noise are then automatically determined using fuzzy thresholding and filtered using a Genetic Particle Filter to remove the noise; use of the Genetic Particle Filter mitigates the sample degeneracy of the standard Particle Filter (PF). EEMD is used in this thesis instead of EMD because it solves the EMD mode-mixing problem and represents a major improvement, with great versatility and robustness, in noisy ECG signal filtering.
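A heavily simplified version of the EEMD-based pipeline can be sketched with the third-party EMD-signal (PyEMD) package, assuming it is installed; noise-dominated IMFs are dropped here with a plain correlation rule rather than the fuzzy thresholding and genetic particle filtering proposed in the thesis.

```python
import numpy as np
from PyEMD import EEMD   # assumes the EMD-signal package (PyEMD) is available

def eemd_denoise(signal, t, keep_threshold=0.2):
    # decompose into IMFs and keep only those reasonably correlated with the original signal
    imfs = EEMD().eemd(signal, t)
    keep = [imf for imf in imfs
            if abs(np.corrcoef(imf, signal)[0, 1]) > keep_threshold]
    return np.sum(keep, axis=0) if keep else np.zeros_like(signal)
```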
APA, Harvard, Vancouver, ISO, and other styles
26

Lu, I.-Chia, and 呂宜家. "Exploiting wavelet de-noising in the temporal sequencesof features for robust speech recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/03773923951424999965.

Full text
Abstract:
Master's degree
National Chi Nan University
Department of Electrical Engineering
99 (ROC academic year)
In this thesis, we propose applying wavelet de-noising (WD) techniques to temporal-domain feature sequences to enhance noise robustness and thereby improve the accuracy of noisy speech recognition. In the proposed method, the temporal-domain feature sequence is first processed by a specific statistics normalization scheme, such as mean and variance normalization (MVN) or cepstral gain normalization (CGN), and then processed by the wavelet de-noising algorithm. We find that the wavelet de-noising procedure can effectively reduce the middle and high modulation-frequency distortion remaining in the statistics-normalized speech features. Experimental results on the Aurora-2 digit database and task show that this process can significantly improve the accuracy of speech recognition in noisy environments. The pairing of WD with CMVN/CGN provides about 20% relative error reduction with respect to the MFCC baseline, outperforms CMVN/CGN alone, and pushes the overall recognition rate beyond 90%.
APA, Harvard, Vancouver, ISO, and other styles
27

Tsai, Yi-Cheng, and 蔡一誠. "Application of Wavelet De-noising Techniques To Mean ScattererSpacing Estimation For Liver Tissue Characterization." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/95250299057267282139.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Electrical Engineering
94 (ROC academic year)
Liver cirrhosis is a very frequently seen disease in Taiwan. Traditionally, doctors use ultrasound to detect liver-related diseases. However, the rate of false diagnosis has increased due to differences in human anatomy and the subjectivity of doctors. In this research, we therefore use the wavelet transform to estimate the mean scatterer spacing of the ultrasound signals obtained from the ultrasound machine. Because of the great complexity of the various tissues, the signals contain considerable noise and attenuation. We therefore use a noise detection method and find a set of thresholds to reduce the noise. Both simulated and real signals are used to observe the efficiency of the de-noising and to obtain signals with less noise. This study will help doctors diagnose liver-related diseases and reduce human error in diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
28

Teng, You-Yang, and 滕有揚. "The Research of Digital Signal Processing Chip Set Applied on Acoustic Signal De-noising." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/q7fn2b.

Full text
Abstract:
Master's degree
Chung Yuan Christian University
Graduate Institute of Information Engineering
92 (ROC academic year)
When acoustic signals are transmitted, they are affected by the environment and by the various noises that permeate the propagation channel, so an appropriate signal processing procedure is necessary to identify signals whose energy has already been reduced by long-distance transmission and environmental interference. This research is based on a wavelet method that chooses a threshold value for de-noising. The procedure is divided into three stages: (1) wavelet transform of the acoustic signals, (2) thresholding of the wavelet coefficients, and (3) inverse wavelet transform to reconstruct the modified signals. The most important part is the second stage, where different threshold-selection rules are compared in terms of acoustic signal recognition performance. The developed system is based on the TI TMS320C6711 DSK; since it offers high performance for digital signal processing, it can reduce the training and recognition time for acoustic signal recognition.
APA, Harvard, Vancouver, ISO, and other styles
29

Chiao, Yu-Hua, and 焦郁華. "Mixed PDE Based Methods with Adaptive Block Truncation Coding for Image De-noising and Compression." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/78626738415621210245.

Full text
Abstract:
Master's degree
National Chung Hsing University
Department of Applied Mathematics
99 (ROC academic year)
In this thesis, we propose an adaptive block truncation coding (ABTC) method for image compression. To achieve better image quality, we propose a novel algorithm that mixes an upwind finite difference scheme for solving a time-dependent convection-diffusion equation with the ABTC algorithm to remove image noise. The numerical results show that our proposed methods effectively remove the noise and preserve edge information well during the image compression process.
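Classic block truncation coding, the starting point of the ABTC scheme above, preserves the mean and standard deviation of each block together with a one-bit-per-pixel map; the sketch below shows the standard reconstruction levels, not the adaptive variant or the convection-diffusion step.

```python
import numpy as np

def btc_block(block):
    # classic BTC: keep the block mean, standard deviation and a 1-bit map of above/below-mean pixels
    mu, sigma = block.mean(), block.std()
    bitmap = block >= mu
    q = int(bitmap.sum())
    m = block.size
    if q in (0, m):
        return np.full(block.shape, mu, dtype=float)
    low = mu - sigma * np.sqrt(q / (m - q))    # reconstruction level for 0-bits
    high = mu + sigma * np.sqrt((m - q) / q)   # reconstruction level for 1-bits
    return np.where(bitmap, high, low)

block = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 block
print(btc_block(block))
```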
APA, Harvard, Vancouver, ISO, and other styles
30

Weng, Mu-Shen, and 翁睦盛. "A Study and Comparison on De-noising of Power Quality Transient Signal with Wavelet Transform." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/95221913866722380682.

Full text
Abstract:
Master's degree
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
90 (ROC academic year)
With the rapid development of high-tech industries and the increasing use of precision production equipment and test instruments, much higher power quality (PQ) is demanded nowadays. The primary task in improving power quality, however, is to collect power signals extensively through PQ monitoring instruments. Based on the analysis of the collected PQ data, the causes of PQ events can be inferred and used as references for PQ improvement. During monitoring, PQ-related signals are recorded via A/D converters, digital fault recorders, and waveform data transmission and quantization. Noise always exists in this process and contaminates the collected PQ signals. The noise-contaminated signals often cause false alarms in the PQ monitor, especially for transient events. To enhance the accuracy of transient-event detection, a high-efficiency de-noising scheme is needed to eliminate the influence of the noise riding on the signals. In processing PQ transient signals, the traditional Fourier Transform (FT), although extensively used for observing high-frequency transient signals, cannot precisely determine the time points at which disturbance events occur; it is therefore insufficient for locating transient events in a database of PQ transient signals. In contrast, with its multi-resolution capability and time-frequency windows that vary across both the time and frequency domains, the Wavelet Transform (WT) can precisely indicate the occurrence points of events when applied to high-frequency analysis with higher resolution in the time domain. The WT is therefore widely employed for the detection of transient signals in power systems. However, because of the noise mentioned above, the accuracy of the WT in detecting transient signals is usually greatly reduced. To reduce the influence of the noise riding on the signals, WT-based de-noising approaches are also widely used; in these approaches, a threshold is applied to prune the noise from the power signals. Nevertheless, setting the threshold relies heavily on experience and field conditions, so the de-noising work is both time- and effort-consuming. To solve the problem of threshold determination, three de-noising algorithms, namely adaptive de-noising, hypothesis-testing de-noising, and spatial-correlation de-noising, are proposed in this thesis to determine the thresholds automatically according to the background noise. With the de-noising methods provided in this thesis for PQ transient-signal monitoring, the ability of the WT to detect and localize disturbances can be restored. To evaluate and compare the feasibility of the three WT-based de-noising approaches for PQ transient signals, simulated data obtained from MATLAB and the Electro-Magnetic Transient Program (EMTP), as well as field data, are used to test the three approaches. The test results show that the three de-noising approaches overcome the influence of noise successfully, as expected. The occurrence times of transient events can therefore be detected and localized accurately by the WT-based approaches. The comparisons also reveal that, if a hardware implementation is needed for on-line de-noising applications, the third algorithm, based on spatial correlation, is recommended for its simpler computation.
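Although the thesis itself gives no code, the core idea of deriving the threshold automatically from the background noise can be illustrated with a short Python sketch using PyWavelets. The universal threshold with a median-absolute-deviation noise estimate stands in for the thesis's three threshold-selection rules; the sampling rate, test waveform, wavelet (db4), and decomposition depth below are all assumptions chosen for illustration.

```python
# Minimal sketch of noise-adaptive wavelet de-noising for a PQ transient,
# assuming a db4 wavelet and the universal threshold with a MAD noise estimate
# (the thesis's adaptive / hypothesis-testing / spatial-correlation rules differ).
import numpy as np
import pywt

fs = 15360                                    # assumed sampling rate (256 samples/cycle at 60 Hz)
t = np.arange(0, 0.2, 1.0 / fs)
clean = np.sin(2 * np.pi * 60 * t)            # fundamental component
clean[1536:1600] += 0.8 * np.sin(2 * np.pi * 2000 * t[1536:1600])  # synthetic transient burst
noisy = clean + 0.05 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=4)
# Estimate the background-noise standard deviation from the finest detail band.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))           # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, 'db4')

# The finest-scale detail coefficients of the de-noised signal now localize the
# transient: the largest magnitude marks the approximate disturbance onset.
d1 = pywt.wavedec(denoised, 'db4', level=1)[1]
onset_index = int(np.argmax(np.abs(d1))) * 2              # detail band is downsampled by 2
print('approximate transient onset at t = %.4f s' % (onset_index / fs))
```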
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, G., Simon J. Shepherd, Clive B. Beggs, N. Rao, and Y. Zhang. "The use of kurtosis de-noising for EEG analysis of patients suffering from Alzheimer's disease." 2015. http://hdl.handle.net/10454/9242.

Full text
Abstract:
No
The use of electroencephalograms (EEGs) to diagnose and analyze Alzheimer's disease (AD) has received much attention in recent years, and the sample entropy (SE) has been widely applied to the diagnosis of AD. In our study, 9 EEGs from 21 scalp electrodes in 3 AD patients and 9 EEGs from 3 age-matched controls are recorded. The calculations show that the kurtoses of the AD patients' EEGs are positive and much higher than those of the controls. This finding encourages us to introduce a kurtosis-based de-noising method. The 21-electrode EEG is first decomposed using independent component analysis (ICA), and the resulting components are then sorted by kurtosis in ascending order. Finally, the EEG signal subspace is reconstructed by back-projecting only the last five components. The SE is calculated after this de-noising preprocessing. The classification results show that this method can significantly improve the accuracy of SE-based diagnosis. The kurtosis analysis of EEG may contribute to a better statistical understanding of brain dysfunction in AD.
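The de-noising step described above (ICA decomposition, sorting the components by kurtosis in ascending order, and back-projecting only the last five) can be sketched in Python with scikit-learn and SciPy. The synthetic 21-channel array below is only a placeholder for real scalp EEG, and the random seed and component count are assumptions for illustration.

```python
# Minimal sketch of the kurtosis-based ICA de-noising described above,
# assuming a (samples x 21 channels) array stands in for real scalp EEG.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 21))            # placeholder data, not real EEG

ica = FastICA(n_components=21, random_state=0)
sources = ica.fit_transform(eeg)                 # shape (samples, 21 components)

# Sort components by kurtosis in ascending order and keep only the last five
# (the most super-Gaussian ones), as in the paper's back-projection step.
order = np.argsort(kurtosis(sources, axis=0))
keep = order[-5:]
sources_kept = np.zeros_like(sources)
sources_kept[:, keep] = sources[:, keep]

# Back-project the retained components to the 21-channel signal subspace.
eeg_denoised = ica.inverse_transform(sources_kept)
print(eeg_denoised.shape)                        # (5000, 21)
```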
APA, Harvard, Vancouver, ISO, and other styles
32

Huang, Min-yu, and 黃敏煜. "A Discrete Wavelet Transform (DWT) based De-noising Circuit Design with its Applications to Medical Signal Processing." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/57371780328343232025.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electronic Engineering
Academic year 93 (ROC calendar)
The wavelet transform is a multiresolution analysis that decomposes an original signal into multi-octave basis functions, through which the original signal can be analyzed. It provides a novel and effective tool for many applications in signal processing. It also has an advantage over the traditional Fourier transform in time-frequency analysis because of its multiresolution characteristic, and it has therefore been widely applied to many aspects of signal- and image-processing research. In this thesis, we propose and realize a Discrete Wavelet Transform (DWT) based de-noising circuit architecture applied to noise reduction for medical signals. The design is based on a three-octave-level decomposition with Daubechies 4 filters. The circuit consists of three parts: DWT, thresholding, and IDWT. Software and hardware simulations were performed first. We then implemented the de-noising circuit by downloading the Verilog code to an FPGA to observe its practical processing ability. Feeding a noisy electrocardiogram (ECG) into the de-noising circuit showed that it satisfies the requirement of real-time processing and achieves good noise-reduction performance.
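A software counterpart of the circuit's DWT, thresholding, and IDWT stages can be sketched with PyWavelets as follows; the soft-threshold rule, threshold value, and synthetic test signal are assumptions, since the thesis realizes the pipeline in Verilog on an FPGA rather than in software.

```python
# Software sketch of the three-level db4 DWT -> thresholding -> IDWT pipeline;
# the threshold rule and the test signal are assumptions for illustration.
import numpy as np
import pywt

fs = 360                                           # assumed ECG sampling rate
t = np.arange(0, 4, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)             # placeholder standing in for an ECG
noisy = ecg_like + 0.2 * np.random.randn(t.size)

# Three octave levels with Daubechies 4, as in the circuit.
cA3, cD3, cD2, cD1 = pywt.wavedec(noisy, 'db4', level=3)

# Threshold the detail bands (soft thresholding; the circuit's rule may differ).
sigma = np.median(np.abs(cD1)) / 0.6745
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
cD1, cD2, cD3 = (pywt.threshold(c, thr, mode='soft') for c in (cD1, cD2, cD3))

denoised = pywt.waverec([cA3, cD3, cD2, cD1], 'db4')
m = min(denoised.size, noisy.size)                 # reconstruction may differ by a sample
print('SNR gain (dB): %.1f' % (10 * np.log10(np.var(noisy[:m] - ecg_like[:m]) /
                                             np.var(denoised[:m] - ecg_like[:m]))))
```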
APA, Harvard, Vancouver, ISO, and other styles
33

Ting, Tzu-hsuan, and 丁子軒. "Combining Deep De-noising Auto-encoder and Recurrent Neural Network in End-to-end Speech Recognition for Noise Robustness." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nrcpz2.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
Academic year 106 (ROC calendar)
In this thesis, we implement an end-to-end noise-robust speech recognition system on the Aurora 2.0 dataset by combining deep de-noising auto-encoders and recurrent neural networks. At the front end, a fully connected de-noising auto-encoder (FCDAE) is used to deal with noisy data. We propose two efficient methods to improve de-noising performance when training the FCDAE. The first is to apply different weights to the loss values of data with different signal-to-noise ratios. The second is to change the way the training data are used. Finally, we combine the two methods and obtain the best experimental results. For the back-end speech recognition, we use an end-to-end system based on a bidirectional recurrent neural network trained with the connectionist temporal classification (CTC) criterion, and compare it with a baseline back end based on hidden Markov models and Gaussian mixture models (HMM-GMM). Integrating the FCDAE with the recognition models, we obtain a 94.20% word accuracy rate in the clean condition and 94.24% in the multi condition, relative improvements of 65% and 20% over the baseline experiments; the 94.20% is obtained with the FCDAE and HMM-GMM, and the 94.24% with the FCDAE and the bidirectional recurrent neural network.
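The FCDAE front end and the first proposed method (weighting the loss by the SNR of each training sample) can be sketched in PyTorch roughly as follows; the layer sizes, feature dimension, and SNR-to-weight table are assumptions for illustration, not the configuration used in the thesis.

```python
# Sketch of a fully connected de-noising auto-encoder (FCDAE) with per-sample
# loss weights that depend on the SNR of each training sample, as in the first
# method described above. Layer sizes and the weight table are assumptions.
import torch
import torch.nn as nn

class FCDAE(nn.Module):
    def __init__(self, feat_dim=39, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

model = FCDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss(reduction='none')

# Hypothetical SNR-to-weight table: noisier data gets a larger weight.
snr_weight = {20: 0.5, 15: 0.75, 10: 1.0, 5: 1.25, 0: 1.5}

def train_step(noisy_feats, clean_feats, snr_db):
    # noisy_feats, clean_feats: (batch, feat_dim); snr_db: (batch,) integer SNRs
    opt.zero_grad()
    pred = model(noisy_feats)
    per_sample = mse(pred, clean_feats).mean(dim=1)        # per-sample reconstruction loss
    w = torch.tensor([snr_weight[int(s)] for s in snr_db])
    loss = (w * per_sample).mean()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with random placeholder tensors:
loss = train_step(torch.randn(8, 39), torch.randn(8, 39),
                  torch.tensor([20, 15, 10, 5, 0, 20, 10, 0]))
print(loss)
```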
APA, Harvard, Vancouver, ISO, and other styles
34

Parravicini, Giovanni. "A factor augmented vector autoregressive model and a stacked de-noising auto-encoders forecast combination to predict the price of oil." Master's thesis, 2019. http://hdl.handle.net/10362/73196.

Full text
Abstract:
This dissertation aims to show the benefits of combining forecasts from an econometric and a deep learning approach. On one side, a Factor Augmented Vector Autoregressive (FAVAR) model with naming-variable identification following Stock and Watson (2016) is implemented; on the other, a Stacked De-noising Auto-Encoder with Bagging (SDAE-B) following Zhao, Li and Yu (2017). From January 2010 to September 2018, two hundred and eighty-one monthly series are used to predict the price of West Texas Intermediate (WTI) crude oil. Model performance is analysed with the Root Mean Squared Error (RMSE), the Mean Absolute Percentage Error (MAPE) and Directional Accuracy (DA). The combination benefits from both the SDAE-B's high accuracy and the FAVAR's interpretability through impulse response functions (IRFs) and forecast error variance decomposition (FEVD).
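The three evaluation metrics and a simple forecast combination can be sketched in a few lines of Python; the equal 50/50 weights and the placeholder price series below are assumptions for illustration and do not reproduce the dissertation's combination scheme or data.

```python
# Sketch of the evaluation metrics (RMSE, MAPE, DA) and a simple equal-weight
# combination of the FAVAR and SDAE-B forecasts; weights and series are assumed.
import numpy as np

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def directional_accuracy(actual, forecast):
    # Fraction of periods in which forecast and actual move in the same direction.
    return np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(forecast)))

# Placeholder series standing in for monthly WTI prices and the two forecasts.
actual = np.array([55.0, 57.2, 60.1, 58.4, 61.0, 63.5])
favar_fc = np.array([54.0, 58.0, 59.0, 59.5, 60.0, 64.0])
sdaeb_fc = np.array([55.5, 56.8, 60.5, 58.0, 61.8, 63.0])

combined = 0.5 * favar_fc + 0.5 * sdaeb_fc
for name, fc in [('FAVAR', favar_fc), ('SDAE-B', sdaeb_fc), ('Combination', combined)]:
    print(name, rmse(actual, fc), mape(actual, fc), directional_accuracy(actual, fc))
```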
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Chia-Chou, and 劉佳洲. "On the Application of the De-noising Method of Stationary Wavelet Coefficients Threshold to Filter Out Noise in Digital Hearing Aids." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43887541206717616847.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Engineering Science and Ocean Engineering
Academic year 98 (ROC calendar)
For a long time, improving the hearing of the hearing-impaired has been a goal that researchers and medical professionals have struggled to achieve. With over 200 million deaf or hard-of-hearing people worldwide, researchers and medical professionals recognize the importance of this goal. Fortunately, technology, from early analog hearing aids to today's mainstream digital hearing aids, has brought a flourishing variety of digital signal processing techniques. The function of current hearing aids is no longer restricted to simple voice amplification, which allows the hearing-impaired to hear directly; they can also satisfy the different needs of different users with different sound-signal processing. Even so, the development of hearing aids still has room for improvement. In this thesis, white noise is added to a clean voice signal to form a noisy voice signal. First, the discrete wavelet transform is used to split the voice bandwidth into nine sub-bands. Second, the stationary wavelet transform is used to split the voice bandwidth into nine sub-bands. Third, the wavelet packet transform is used to split the voice bandwidth into eight identical sub-bands. A wavelet de-noising method is used to filter out high-frequency noise. After the voice signal has been de-noised, it is compensated for four different types of hearing loss: 40 dB uniform hearing loss, mild low-frequency hearing loss, moderate high-frequency hearing loss, and severe high-frequency hearing loss. Finally, a saturation volume limit restricts the energy of the final output speech to a fixed level. This thesis simulates voice-signal processing with the wavelet transform. The verification process shows that white noise can be filtered out effectively and the four types of hearing loss can be compensated, achieving the basic functions of a digital hearing aid.
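The stationary-wavelet-transform thresholding and a crude per-band gain, standing in for the hearing-loss compensation step, can be sketched with PyWavelets as follows; the wavelet, decomposition depth, threshold rule, and gain values are assumptions rather than the thesis's settings.

```python
# Sketch of stationary-wavelet-transform (SWT) threshold de-noising followed by
# a simple per-band gain as a stand-in for hearing-loss compensation.
# Wavelet, depth, threshold rule, and gains are assumptions, not the thesis's values.
import numpy as np
import pywt

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
speech_like = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
noisy = speech_like + 0.1 * np.random.randn(t.size)

level = 5                                      # the thesis uses a deeper nine-band split
n = (noisy.size // 2 ** level) * 2 ** level    # SWT needs a length divisible by 2**level
x = noisy[:n]

coeffs = pywt.swt(x, 'db4', level=level)       # list of (cA, cD) pairs, coarsest first
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
thr = sigma * np.sqrt(2.0 * np.log(n))

gains = [1.0, 1.0, 1.2, 1.5, 2.0]              # assumed per-band gains (coarse -> fine)
processed = [(cA, g * pywt.threshold(cD, thr, mode='soft'))
             for (cA, cD), g in zip(coeffs, gains)]
denoised = pywt.iswt(processed, 'db4')
print(denoised.shape)
```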
APA, Harvard, Vancouver, ISO, and other styles
36

Chien, Hsin-Kai, and 錢信凱. "A Study of Images Recognition and De-noising with Varying Emissivity and Temperature Levels by Using the Middle Wave Infrared Camera." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/71882083749283351026.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Mold and Die Engineering
Academic year 98 (ROC calendar)
In this study, a middle-wave infrared (MWIR) camera is used to acquire infrared images of a target object under varying background temperature and emissivity, and the displayed infrared images are discussed. Image processing methods are then applied for recognition and de-noising. When the temperatures of the target object and the background are close, the quality of the infrared images is noticeably degraded, making recognition difficult and increasing the noise in the surrounding image. Moreover, images with a larger emissivity difference are clearer than those with a smaller one. To extend the use of MWIR images in practical measurement, it is important to acquire reliable images and to apply appropriate image processing methods. The experiment obtains infrared images of three target objects against eight backgrounds of different emissivity (stainless steel with gray paint εb=0.7, cast iron εb=0.92, white paper εb=0.93, metal with gray paint εb=0.94, wood εb=0.95, blue cloth εb=0.96, black paper εb=0.98, stainless steel with black paint εb=0.99) and at five different temperatures (31°C, 33°C, 35°C, 37°C, 39°C). An experimental box is used to reduce environmental error, and the average ambient temperature is 25°C. A digital thermocouple and the middle-wave infrared camera are used to determine the emissivity of the background and the target object, respectively, and a plate-type temperature controller is used to heat the objects to the different temperatures. After the infrared images of the three target objects are selected, image processing methods are applied to enhance the blurred images, recognize edges, and remove noise. This study explains the role of these image processing methods for infrared images. At present, the contours of the target objects can be obtained and the noise that hinders recognition can be removed after converting the blurred images to gray-level images.
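The image-processing chain described above (gray-level conversion, noise removal, and edge/contour extraction) can be sketched with OpenCV roughly as follows; the file name, histogram-equalization step, filter size, and Canny thresholds are assumptions chosen for illustration, not parameters taken from the thesis.

```python
# Sketch of gray-level conversion, de-noising, and edge/contour extraction for
# an MWIR frame; 'mwir_frame.png', kernel size, and Canny thresholds are assumed.
import cv2

img = cv2.imread('mwir_frame.png')                    # hypothetical captured frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # gray-level conversion
equalized = cv2.equalizeHist(gray)                    # enhance a blurred, low-contrast image
denoised = cv2.medianBlur(equalized, 5)               # suppress impulsive noise
edges = cv2.Canny(denoised, 50, 150)                  # edge recognition

# Extract the target object's contour from the edge map (OpenCV 4.x signature).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    outline = cv2.drawContours(img.copy(), [largest], -1, (0, 255, 0), 2)
    cv2.imwrite('mwir_outline.png', outline)
```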
APA, Harvard, Vancouver, ISO, and other styles
37

Pandey, Santosh Kumar. "Signal Processing Tools To Enhance Interpretation Of Impulse Tests On Power Transformers." Thesis, 1997. https://etd.iisc.ac.in/handle/2005/1821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Pandey, Santosh Kumar. "Signal Processing Tools To Enhance Interpretation Of Impulse Tests On Power Transformers." Thesis, 1997. http://etd.iisc.ernet.in/handle/2005/1821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Salgado, Patarroyo Ivan Camilo. "Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals." Thesis, 2013. http://hdl.handle.net/10012/7847.

Full text
Abstract:
Despite the immense advances of science and medicine in recent years, several aspects of the physiology and anatomy of the human brain are yet to be discovered and understood. A particularly challenging area in the study of human brain anatomy is that of brain connectivity, which describes the intricate means by which different regions of the brain interact with each other. The study of brain connectivity depends deeply on understanding the organization of white matter. The latter is predominantly composed of bundles of myelinated axons, which serve as connecting pathways between approximately 10¹¹ neurons in the brain. Consequently, the delineation of fine anatomical details of white matter represents a highly challenging objective, and it is still an active area of research in neuroimaging and neuroscience in general. Recent advances in medical imaging have resulted in a quantum leap in our understanding of brain anatomy and functionality. In particular, the advent of diffusion magnetic resonance imaging (dMRI) has provided researchers with a non-invasive means to infer information about the connectivity of the human brain. In a nutshell, dMRI is a set of imaging tools which aim at quantifying the process of water diffusion within the human brain to delineate the complex structural configurations of the white matter. Among the existing tools of dMRI, high angular resolution diffusion imaging (HARDI) offers a desirable trade-off between reconstruction accuracy and practical feasibility. In particular, HARDI excels in its ability to delineate complex directional patterns of the neural pathways throughout the brain, while remaining feasible for many clinical applications. Unfortunately, HARDI presents a fundamental trade-off between its ability to discriminate crossings of neural fiber tracts (i.e., its angular resolution) and the signal-to-noise ratio (SNR) of its associated images. Consequently, given that angular resolution is of fundamental importance in the context of dMRI reconstruction, there is a need for effective algorithms for de-noising HARDI data. In this regard, the most effective de-noising approaches have been observed to be those which exploit both the angular and the spatial-domain regularity of HARDI signals. Accordingly, this thesis proposes a formulation of the problem of reconstruction of HARDI signals which incorporates regularization assumptions on both their angular and their spatial domains, while leading to a particularly simple numerical implementation. Experimental evidence suggests that the resulting cross-domain regularization procedure outperforms many state-of-the-art HARDI de-noising methods. Moreover, the proposed implementation replaces the original reconstruction problem with a sequence of efficient filters which can be executed in parallel, suggesting its computational advantages over alternative implementations.
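The thesis's reconstruction algorithm is not reproduced here, but the general idea of cross-domain regularization, smoothing each diffusion direction across neighbouring voxels while also smoothing each voxel's signal across neighbouring directions on the sphere, can be illustrated with a minimal NumPy/SciPy sketch; the volume shape, random gradient directions, kernel widths, and single filtering pass below are all assumptions.

```python
# Illustrative sketch of cross-domain (spatial + angular) smoothing of a HARDI
# signal of shape (X, Y, Z, n_dirs). This is NOT the thesis's algorithm, only a
# single pass of two separable filters under assumed shapes and kernel widths.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
X, Y, Z, n_dirs = 16, 16, 10, 64
dirs = rng.standard_normal((n_dirs, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)    # unit gradient directions
hardi = rng.random((X, Y, Z, n_dirs))                  # placeholder noisy HARDI data

# 1) Spatial-domain filter: smooth each direction's volume over x, y, z only.
spatially_smoothed = gaussian_filter(hardi, sigma=(1.0, 1.0, 1.0, 0.0))

# 2) Angular-domain filter: for each voxel, average over nearby directions on
#    the sphere, weighting by |u.v| (antipodal symmetry of diffusion signals).
kappa = 10.0
W = np.exp(kappa * (np.abs(dirs @ dirs.T) - 1.0))      # (n_dirs, n_dirs) weights
W /= W.sum(axis=1, keepdims=True)
cross_domain = spatially_smoothed @ W.T                # applied along the last axis

print(cross_domain.shape)                              # (16, 16, 10, 64)
```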
APA, Harvard, Vancouver, ISO, and other styles