
Dissertations / Theses on the topic 'Denoising Images'


Consult the top 50 dissertations / theses for your research on the topic 'Denoising Images.'


1

Rafi, Nazari Mina. "Denoising and Demosaicking of Color Images." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35802.

Full text
Abstract:
Most digital cameras capture images through a Color Filter Array (CFA) and reconstruct the full color image from the CFA image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. The demosaicking algorithm estimates these unknown color components at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. Some other CFAs contain four color filters; the additional filter is a panchromatic/white filter, which usually receives the full light spectrum. In this research, we studied different four-channel CFAs with a panchromatic/white filter and compared them with three-channel CFAs. An appropriate demosaicking algorithm has been developed for each CFA. The best-known three-channel CFA is the Bayer pattern. The Fujifilm X-Trans pattern has been studied in this work as another three-channel CFA with a different structure. Three different four-channel CFAs have been discussed in this research: RGBW-Kodak, RGBW-Bayer and RGBW-$5 \times 5$. The structure and the number of filters for each color differ between these CFAs. Since the Least-Square Luma-Chroma Demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA, we designed least-square methods for the RGBW CFAs. The effect of noise on the different four-channel CFA patterns is also discussed. The Kodak database has been used to evaluate our non-adaptive and adaptive demosaicking methods as well as the algorithms optimized with the least-square method. The captured values of the white (panchromatic/clear) filters in RGBW CFAs have been estimated using the red, green and blue filter values, and sets of optimized coefficients have been proposed to estimate the white filter values accurately. The results have been validated using the actual white values of a hyperspectral image dataset. A new denoising-demosaicking method for the RGBW-Bayer CFA has been presented in this research. The algorithm has been tested on the Kodak dataset using the estimated white filter values and on a hyperspectral image dataset using the actual white filter values, and the results have been compared. In both cases, the comparison with previous work on the RGB-Bayer CFA shows that the proposed algorithm using the RGBW-Bayer CFA performs better in the presence of noise.
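For orientation only, here is a minimal, hedged sketch of plain bilinear demosaicking of an RGGB Bayer mosaic in Python. The function name, the assumed RGGB layout and the kernels are illustrative; they are not taken from the thesis, which develops a least-squares luma-chroma demultiplexing method.

```python
# Illustrative only: bilinear interpolation of the missing colour samples of a Bayer CFA.
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaick(cfa):
    """cfa: 2-D array with one colour sample per pixel, RGGB layout assumed."""
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1.0 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0   # green occupies 2 of every 4 pixels
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0   # red/blue occupy 1 of every 4 pixels

    r = convolve(cfa * r_mask, k_rb, mode='mirror')
    g = convolve(cfa * g_mask, k_g,  mode='mirror')
    b = convolve(cfa * b_mask, k_rb, mode='mirror')
    return np.stack([r, g, b], axis=-1)
```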
APA, Harvard, Vancouver, ISO, and other styles
2

Björling, Robin. "Denoising of Infrared Images Using Independent Component Analysis." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4954.

Full text
Abstract:



The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) to noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images. The result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed. An independent component analysis is made on images from an infrared sensor and typical fixed pattern noise components are manually identified. The identified components are then used to quickly and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as on real images and the performance is measured.
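As a rough, self-contained sketch of the sparse code shrinkage idea in Python: an ICA basis from scikit-learn and a plain soft-threshold shrinkage with an ad-hoc threshold. This is my own simplification, not the estimator-optimal shrinkage functions or the FPN procedure developed in the thesis.

```python
# Simplified sparse code shrinkage: project patches onto an ICA basis learned from
# clean data, shrink the sparse components, and project back.
import numpy as np
from sklearn.decomposition import FastICA

def sparse_code_shrinkage(noisy_patches, clean_patches, sigma):
    """Both inputs are (n_patches, patch_dim) arrays; sigma is the assumed noise std."""
    ica = FastICA(n_components=noisy_patches.shape[1], whiten='unit-variance', max_iter=500)
    ica.fit(clean_patches)                       # ICA basis estimated on noise-free patches
    s = ica.transform(noisy_patches)             # sparse components of the noisy patches
    s = np.sign(s) * np.maximum(np.abs(s) - sigma**2, 0.0)   # crude soft shrinkage (placeholder rule)
    return ica.inverse_transform(s)
```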

APA, Harvard, Vancouver, ISO, and other styles
3

Dacke, Fredrik. "Non-local means denoising of projection images in cone beam computed tomography." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122419.

Full text
Abstract:
A new edge-preserving denoising method is used to increase image quality in cone beam computed tomography. The reconstruction algorithm for cone beam computed tomography used by Elekta enhances high-frequency image details, e.g. noise, and we propose that denoising is done on the projection images before reconstruction. The denoising method is shown to have a connection with computational statistics and some mathematical improvements to the method are considered. Comparisons are made with the state-of-the-art method on both artificial and physical objects. The results show that the smoothness of the images is enhanced at the cost of blurring out image details. Some results show how the setting of the method parameters influences the trade-off between smoothness and blurred image details in the images.
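A hedged illustration of the proposed order of operations (denoise each projection, then reconstruct), using scikit-image's generic non-local means rather than the thesis' implementation; the filter parameters are placeholders.

```python
# Sketch: non-local means applied to each projection image before reconstruction.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_projections(projections):
    """projections: iterable of 2-D projection images."""
    out = []
    for p in projections:
        sigma = float(np.mean(estimate_sigma(p)))   # rough noise estimate per projection
        out.append(denoise_nl_means(p, h=1.15 * sigma, sigma=sigma,
                                    patch_size=5, patch_distance=6, fast_mode=True))
    return np.array(out)
```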
APA, Harvard, Vancouver, ISO, and other styles
4

Papoutsellis, Evangelos. "First-order gradient regularisation methods for image restoration : reconstruction of tomographic images with thin structures and denoising piecewise affine images." Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/256216.

Full text
Abstract:
The focus of this thesis is variational image restoration techniques that involve novel non-smooth first-order gradient regularisers: Total Variation (TV) regularisation in image and data space for reconstruction of thin structures from PET data and regularisers given by an infimal-convolution of TV and $L^p$ seminorms for denoising images with piecewise affine structures. In the first part of this thesis, we present a novel variational model for PET reconstruction. During a PET scan, we encounter two different spaces: the sinogram space that consists of all the PET data collected from the detectors and the image space where the reconstruction of the unknown density is finally obtained. Unlike most of the state of the art reconstruction methods in which an appropriate regulariser is designed in the image space only, we introduce a new variational method incorporating regularisation in image and sinogram space. In particular, the corresponding minimisation problem is formed by a total variational regularisation on both the sinogram and the image and with a suitable weighted $L^2$ fidelity term, which serves as an approximation to the Poisson noise model for PET. We establish the well-posedness of this new model for functions of Bounded Variation (BV) and perform an error analysis through the notion of the Bregman distance. We examine analytically how TV regularisation on the sinogram affects the reconstructed image especially the boundaries of objects in the image. This analysis motivates the use of a combined regularisation principally for reconstructing images with thin structures. In the second part of this thesis we propose a first-order regulariser that is a combination of the total variation and $L^p$ seminorms with $1 < p \le \infty$. A well-posedness analysis is presented and a detailed study of the one dimensional model is performed by computing exact solutions for simple functions such as the step function and a piecewise affine function, for the regulariser with $p = 2$ and $p = 1$. We derive necessary and sufficient conditions for a pair in $BV \times L^p$ to be a solution for our proposed model and determine the structure of solutions dependent on the value of $p$. In the case $p = 2$, we show that the regulariser is equivalent to the Huber-type variant of total variation regularisation. Moreover, there is a certain class of one dimensional data functions for which the regularised solutions are equivalent to high-order regularisers such as the state of the art total generalised variation (TGV) model. The key assets of our regulariser are the elimination of the staircasing effect - a well-known disadvantage of total variation regularisation - the capability of obtaining piecewise affine structures for $p = 1$ and qualitatively comparable results to TGV. In addition, our first-order $TVL^p$ regulariser is capable of preserving spike-like structures that TGV is forced to smooth. The numerical solution of the proposed first-order model is in general computationally more efficient compared to high-order approaches.
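For readers who want a concrete starting point, the following minimal sketch applies plain first-order TV denoising with scikit-image. It is only the basic building block, not the infimal-convolution $TVL^p$ model or the PET reconstruction functional analysed in the thesis, and the noise level and weight below are arbitrary.

```python
# Baseline first-order TV denoising (Chambolle's algorithm) on a test image.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = clean + 0.08 * rng.standard_normal(clean.shape)
tv_denoised = denoise_tv_chambolle(noisy, weight=0.1)   # weight plays the role of the TV parameter
```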
APA, Harvard, Vancouver, ISO, and other styles
5

Roussel, Nicolas. "Denoising of Dual Energy X-ray Absorptiometry Images and Vertebra Segmentation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233845.

Full text
Abstract:
Dual Energy X-ray Absorptiometry (DXA) is a medical imaging modality used to quantify bone mineral density and to detect fractures. It is widely used due to its low cost and low radiation dose; however, it produces noisy images that can be difficult to interpret for a human expert or a machine. In this study, we investigate denoising of DXA lateral spine images and automatic vertebra segmentation in the resulting images. For denoising, we design adaptive filters to avoid the frequent appearance of edge artifacts (cross contamination), and validate our results with an observer experiment. Segmentation is performed using deep convolutional neural networks trained on manually segmented DXA images. Using few training images, we focus on the depth of the network and on the amount of training data. At the best depth, we report a 94% mean Dice score on test images, with no post-processing. We also investigate the application of a network trained on one of our databases to the other (different resolution). We show that in some cases cross contamination can degrade the segmentation results, and that the use of our adaptive filters helps solve this problem. Our results reveal that even with little data and a short training, neural networks produce accurate segmentations. This suggests they could be used for fracture classification. However, the results should be validated on bigger databases with more fracture cases and other pathologies.
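A small helper sketch for the evaluation metric quoted above (the mean Dice score); the function and its arguments are my own illustration, not code from the thesis.

```python
# Dice score between a predicted and a reference binary segmentation mask.
import numpy as np

def dice(pred, target, eps=1e-7):
    """pred, target: boolean or {0,1} arrays of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```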
APA, Harvard, Vancouver, ISO, and other styles
6

Hua, Yuai, Jianmei Lu, Huayong Zhang, Jinyong Cheng, Wei Liang, and Tianduo Li. "Denoising and Segmentation of MCT Slice Images of Leather Fiber - 170." Verein für Gerberei-Chemie und -Technik e. V, 2019. https://slub.qucosa.de/id/qucosa%3A34310.

Full text
Abstract:
The braiding structure of leather fibers is not yet clearly understood, and studying it is both useful and interesting. Microscopic X-ray tomography (MCT) can produce cross-sectional images of the leather without destroying its structure. The three-dimensional structure of leather fibers can be reconstructed from MCT slice images, so as to show the braiding structure and regularity of leather fibers. The denoising and segmentation of MCT slice images of leather fibers is the basic procedure for three-dimensional reconstruction. To study the braiding structure of leather fibers comprehensively, MCT slice images of resin-embedded leather fibers and of in-situ leather fibers were analyzed and processed. It was shown that the resin-embedded leather fiber MCT slices are quite different from the in-situ leather fiber MCT slices. In-situ leather fiber MCT slice images can be denoised relatively easily, but denoising the resin-embedded leather fiber MCT slice images is a challenge because of their strong noise. In addition, some fiber bundles adhere to each other in the slice images, which makes them difficult to segment. There are many methods for image denoising and segmentation, but no general method can process all types of images. In this paper, a series of computer-aided denoising and segmentation algorithms is designed for in-situ MCT slice images of leather fibers and for resin-embedded MCT slice images. The fiber bundles in wide-field MCT images are densely distributed and adhere to each other; many fiber bundles are separated in one image and tightly bound in another, which brings great difficulty to image segmentation. To solve this problem, the following segmentation methods are used: grayscale-threshold segmentation, region-growing segmentation, and three-dimensional image segmentation. The denoising and segmentation algorithms proposed in this paper perform remarkably well on a series of original MCT slice images and resin-embedded leather fiber MCT slice images. A series of three-dimensional images based on this work demonstrates the fine spatial braiding structure of leather fiber, which helps us to better understand the braiding structure of leather fibers.
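As a minimal illustration of the simplest of the segmentation methods listed above (grayscale thresholding on a lightly denoised slice), assuming scikit-image and SciPy; the median pre-filter and Otsu threshold are generic stand-ins, not the algorithms designed in the paper.

```python
# Grayscale-threshold segmentation of one MCT slice after a crude median denoising step.
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from skimage.measure import label

def segment_slice(slice_img):
    den = median_filter(slice_img, size=3)     # simple denoising stand-in
    mask = den > threshold_otsu(den)           # grayscale-threshold segmentation
    return label(mask)                         # connected fibre-bundle candidates
```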
APA, Harvard, Vancouver, ISO, and other styles
7

Nifong, Nathaniel H. "Learning General Features From Images and Audio With Stacked Denoising Autoencoders." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1550.

Full text
Abstract:
One of the most impressive qualities of the brain is its neuroplasticity. The neocortex has roughly the same structure throughout its whole surface, yet it is involved in a variety of different tasks from vision to motor control, and regions which once performed one task can learn to perform another. Machine learning algorithms which aim to be plausible models of the neocortex should also display this plasticity. One such candidate is the stacked denoising autoencoder (SDA). SDAs have shown promising results in the field of machine perception, where they have been used to learn abstract features from unlabeled data. In this thesis I develop a flexible distributed implementation of an SDA and train it on images and audio spectrograms to experimentally determine properties comparable to neuroplasticity. Specifically, I compare the visual-auditory generalization of a multi-level denoising autoencoder trained with greedy layer-wise pre-training (GLWPT) to one trained without it. I test the hypothesis that multi-modal networks will perform better than uni-modal networks due to the greater generality of the features that may be learned. Furthermore, I also test the hypothesis that the magnitude of improvement gained from this multi-modal training is greater when GLWPT is applied than when it is not. My findings indicate that these hypotheses were not confirmed, but that GLWPT still helps multi-modal networks adapt to their second sensory modality.
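A compact sketch of a single denoising autoencoder layer in PyTorch, of the kind that is stacked in an SDA; the layer sizes, sigmoid activations and masking-noise corruption are generic textbook choices, not the distributed implementation described in the thesis.

```python
# One denoising autoencoder layer: corrupt the input with masking noise,
# encode to a hidden representation, and reconstruct the clean input.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in, n_hidden, corruption=0.3):
        super().__init__()
        self.corruption = corruption
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        mask = (torch.rand_like(x) > self.corruption).float()   # randomly zero a fraction of inputs
        return self.decoder(self.encoder(x * mask))

# A training loop would minimise nn.MSELoss()(model(x), x) against the clean targets,
# then stack a second layer on the learned hidden codes (greedy layer-wise pre-training).
```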
APA, Harvard, Vancouver, ISO, and other styles
8

Trinh, Dinh Hoan. "Denoising and super-resolution for medical images by example-based learning approach." Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_trinh.pdf.

Full text
Abstract:
The objective of this thesis is to develop effective denoising and super-resolution methods in order to improve the quality and spatial resolution of medical images. In particular, we are motivated by the challenge of integrating the denoising and super-resolution problems into a single formulation. Our methods use standard images, or example images close to the image under consideration, for denoising and/or super-resolution. For the denoising problem, we introduce three new methods that reduce certain types of noise commonly found in medical images. The first method is built on kernel ridge regression and can be applied to Gaussian and Rician noise. In the second method, denoising is performed by a regression model built on the K-nearest neighbors; this method can be used to reduce Gaussian and Poisson noise. In the third method, we propose a sparse representation model to remove Gaussian noise from low-dose CT images. The proposed denoising methods are competitive with existing approaches. For super-resolution, we propose two new example-based single-image methods. The first is a geometric method based on projection onto the convex hull. In the second, super-resolution is performed via a sparse representation model. The experimental results show that the proposed methods are very effective for medical images, which are often affected by noise.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Weiying. "Multitemporal SAR images denoising and change detection : applications to Sentinel-1 data." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT003/document.

Full text
Abstract:
The inherent speckle that is attached to any coherent imaging system affects the analysis and interpretation of synthetic aperture radar (SAR) images. To take advantage of well-registered multi-temporal SAR images, we improve the adaptive nonlocal temporal filter with state-of-the-art adaptive denoising methods and propose a patch-based adaptive temporal filter. To address the bias problem of the denoising results, we propose a fast and efficient multitemporal despeckling method. The key idea of the proposed approach is the use of the ratio image, given by the ratio between an image and the temporal mean of the stack. This ratio image is easier to denoise than a single image thanks to its improved stationarity. Besides, temporally stable thin structures are well preserved thanks to the multi-temporal mean. In the absence of a reference image, we propose a patch-based auto-covariance residual evaluation method to examine the residual image and look for possible remaining structural content. With the speckle-reduced images, we then use the simplified generalized likelihood ratio method to detect the change areas, change magnitudes and change times in long series of well-registered images. Based on spectral clustering, we apply the simplified generalized likelihood ratio to detect the change types in the time series. The jet colormap and HSV colorization may then be used to vividly visualize the detection results. These methods have been successfully applied to monitor changes in farmland areas, urban areas, harbor regions, and flooded areas.
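A conceptual sketch of the ratio-image idea described above, in Python; the non-local means call and its parameters are placeholders standing in for the adaptive patch-based filter of the thesis.

```python
# Ratio-image multitemporal despeckling: denoise the ratio between each date and the
# temporal mean of the stack, then multiply back by the temporal mean.
import numpy as np
from skimage.restoration import denoise_nl_means

def ratio_despeckle(stack, eps=1e-6):
    """stack: (n_dates, H, W) array of co-registered SAR intensity images."""
    stack = np.asarray(stack, dtype=float)
    temporal_mean = stack.mean(axis=0)
    out = np.empty_like(stack)
    for t, img in enumerate(stack):
        ratio = img / (temporal_mean + eps)            # more stationary than the image itself
        out[t] = denoise_nl_means(ratio, h=0.8, fast_mode=True) * temporal_mean
    return out
```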
APA, Harvard, Vancouver, ISO, and other styles
10

Briand, Thibaud. "Image Formation from a Large Sequence of RAW Images : performance and accuracy." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1017/document.

Full text
Abstract:
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging problem requiring on-the-fly demosaicking, denoising and super-resolution. Existing algorithms produce high-quality images, but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially so that the memory cost only depends on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common system of coordinates using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (which we also introduce). We evaluate the performance and the accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution and the residual noise decreases as expected. We obtained results similar to those obtained by slower and more memory-greedy methods. As generating synthetic data requires an interpolation method, we also study in detail the trigonometric polynomial and B-spline interpolation methods. We derive from this study new fine-tuned interpolation methods.
APA, Harvard, Vancouver, ISO, and other styles
11

Didas, Stephan [Verfasser], and Joachim [Akademischer Betreuer] Weickert. "Denoising and enhancement of digital images : variational methods, integrodifferential equations, and wavelets / Stephan Didas. Betreuer: Joachim Weickert." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/105105673X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Nasser, Khalafallah Mahmoud Lamees. "A dictionary-based denoising method toward a robust segmentation of noisy and densely packed nuclei in 3D biological microscopy images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS283.pdf.

Full text
Abstract:
Cells are the basic building blocks of all living organisms. All living organisms share life processes such as growth and development, movement, nutrition, excretion, reproduction, respiration and response to the environment. In cell biology research, understanding cell structure and function is essential for developing and testing new drugs. In addition, cell biology research provides a powerful tool to study embryo development. Furthermore, it helps the scientific research community to understand the effects of mutations and various diseases. Time-Lapse Fluorescence Microscopy (TLFM) is one of the most appreciated imaging techniques, which can be used in live-cell imaging experiments to quantify various characteristics of cellular processes, i.e., cell survival, proliferation, migration, and differentiation. In TLFM imaging, not only spatial information is acquired, but also temporal information, obtained by repeatedly imaging a labeled sample at specific time points, as well as spectral information, which produces up to five-dimensional (X, Y, Z + Time + Channel) images. Typically, the generated datasets consist of several (hundreds or thousands of) images, each containing hundreds to thousands of objects to be analyzed. To perform high-throughput quantification of cellular processes, nuclei segmentation and tracking should be performed in an automated manner. Nevertheless, nuclei segmentation and tracking are challenging tasks due to embedded noise, intensity inhomogeneity, shape variation as well as weak nuclei boundaries. Although several nuclei segmentation approaches have been reported in the literature, dealing with embedded noise remains the most challenging part of any segmentation algorithm. We propose a novel 3D denoising algorithm, based on unsupervised dictionary learning and sparse representation, that both enhances very faint and noisy nuclei and simultaneously detects nuclei positions accurately. Furthermore, our method relies on a limited number of parameters, with only one being critical: the approximate size of the objects of interest. The framework of the proposed method comprises image denoising, nuclei detection, and segmentation. In the denoising step, an initial dictionary is constructed by selecting random patches from the raw image; an iterative technique is then applied to update the dictionary and obtain the final, less noisy one. Next, a detection map, based on the dictionary coefficients used to denoise the image, is used to detect marker points. Afterwards, a thresholding-based approach is proposed to obtain the segmentation mask. Finally, a marker-controlled watershed approach is used to obtain the final nuclei segmentation result. We generate 3D synthetic images to study the effect of the few parameters of our method on cell nuclei detection and segmentation, and to understand the overall mechanism for selecting and tuning the significant parameters across the several datasets. These synthetic images have low contrast and a low signal-to-noise ratio, and they include touching spheres; these conditions simulate the characteristics found in the real datasets. The proposed framework shows that integrating our denoising method with a classical segmentation method works properly even in the most challenging cases. To evaluate the performance of the proposed method, two datasets from the Cell Tracking Challenge are extensively tested.
Across all datasets, the proposed method achieved very promising results, with 96.96% recall on the C. elegans dataset. In addition, on the Drosophila dataset, our method achieved a very high recall (99.3%).
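A sketch of the very last step only (marker-controlled watershed) using scikit-image; in the thesis the markers come from the dictionary-based detection map, which is replaced here by distance-transform maxima for illustration.

```python
# Marker-controlled watershed on a binary nuclei mask (2-D or 3-D).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_nuclei(mask, approx_radius=5):
    """mask: binary segmentation mask of the denoised image or volume."""
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=approx_radius, labels=mask.astype(int))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per detected nucleus
    return watershed(-distance, markers, mask=mask)
```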
APA, Harvard, Vancouver, ISO, and other styles
13

Yousif, Osama. "Change Detection Using Multitemporal SAR Images." Licentiate thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123494.

Full text
Abstract:
Multitemporal SAR images have been used successfully for the detection of different types of environmental changes. The detection of urban change using SAR images is complicated due to the special characteristics of SAR images—for example, the existence of speckle and the complex mixture of the urban environment. This thesis investigates the detection of urban changes using SAR images with the following specific objectives: (1) to investigate unsupervised change detection, (2) to investigate reduction of the speckle effect and (3) to investigate spatio-contextual change detection. Beijing and Shanghai, the largest cities in China, were selected as study areas. Multitemporal SAR images acquired by ERS-2 SAR (1998~1999) and Envisat ASAR (2008~2009) sensors were used to detect changes that have occurred in these cities. Unsupervised change detection using SAR images is investigated using the Kittler-Illingworth algorithm. The problem associated with the diversity of urban changes—namely, more than one typology of change—is addressed using the modified ratio operator. This operator clusters both positive and negative changes on one side of the change-image histogram. To model the statistics of the changed and the unchanged classes, four different probability density functions were tested. The analysis indicates that the quality of the resulting change map strongly depends on the density model chosen. The analysis also suggests that the use of a local adaptive filter (e.g., enhanced Lee) removes fine geometric details from the scene. Speckle suppression and geometric detail preservation in SAR-based change detection are addressed using the nonlocal means (NLM) algorithm. In this algorithm, denoising is achieved through a weighted averaging process, in which the weights are a function of the similarity of small image patches defined around each pixel in the image. To decrease the computational complexity, the PCA technique is used to reduce the dimensionality of the neighbourhood feature vectors. Simple methods to estimate the dimensionality of the new space and the required noise variance are proposed. The experimental results show that the NLM algorithm outperformed traditional local adaptive filters (e.g., enhanced Lee) in eliminating the effect of speckle and in maintaining the geometric structures in the scene. The analysis also indicates that filtering the change variable instead of the individual SAR images is effective in terms of both the quality of the results and the time needed to carry out the computation. The third part of the research focuses on the application of Markov random fields (MRF) in change detection using SAR images. The MRF-based change detection algorithm shows limited capacity to simultaneously maintain fine geometric detail in urban areas and combat the effect of speckle noise. This problem has been addressed through the introduction of a global constraint on the pixels' class labels. Based on NLM theory, a global probability model is developed. The iterated conditional mode (ICM) scheme for the optimization of the MAP-MRF criterion function is extended to include a step that forces the maximization of the global probability model. The experimental results show that the proposed algorithm is better at preserving fine structural detail, effective in reducing the effect of speckle, less sensitive to the value of the contextual parameter, and less affected by the quality of the initial change map compared with the traditional MRF-based change detection algorithm.
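For reference, a hedged implementation of the Kittler-Illingworth minimum-error thresholding criterion applied to a change-image histogram; this is the standard Gaussian formulation, whereas the thesis also tests log-normal, Nakagami-ratio and other density models.

```python
# Kittler-Illingworth minimum-error thresholding on the histogram of a change image.
import numpy as np

def kittler_illingworth(image, n_bins=256):
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_J, best_t = np.inf, centers[0]
    for t in range(1, n_bins - 1):
        p1, p2 = hist[:t].sum(), hist[t:].sum()
        if p1 < 1e-8 or p2 < 1e-8:
            continue
        m1 = (hist[:t] * centers[:t]).sum() / p1
        m2 = (hist[t:] * centers[t:]).sum() / p2
        v1 = (hist[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (hist[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # criterion J(T) = 1 + 2[P1 ln s1 + P2 ln s2] - 2[P1 ln P1 + P2 ln P2]
        J = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if J < best_J:
            best_J, best_t = J, centers[t]
    return best_t   # threshold separating "changed" from "unchanged"
```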


APA, Harvard, Vancouver, ISO, and other styles
14

Mairal, Julien. "Sparse coding for machine learning, image processing and computer vision." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00595312.

Full text
Abstract:
We study in this thesis a particular machine learning approach for representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms that this has yielded. We address several open questions related to this framework: How to efficiently optimize the dictionary? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, but also optimization on graphs.
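A toy sketch in the spirit of the dictionary-learning denoising discussed above, using scikit-learn's generic implementation; the patch size, number of atoms and sparsity level below are arbitrary, and this is not the dedicated solvers associated with the thesis.

```python
# Patch-based dictionary-learning denoising: learn atoms on noisy patches,
# sparse-code each patch with OMP, and reconstruct the image from the coded patches.
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(noisy, patch=(7, 7), n_atoms=100):
    patches = extract_patches_2d(noisy, patch)
    shape = patches.shape
    X = patches.reshape(shape[0], -1)
    mean = X.mean(axis=1, keepdims=True)                 # remove per-patch DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=200,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=2)
    code = dico.fit(X - mean).transform(X - mean)
    rec = code @ dico.components_ + mean                 # sparse reconstruction of each patch
    return reconstruct_from_patches_2d(rec.reshape(shape), noisy.shape)
```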
APA, Harvard, Vancouver, ISO, and other styles
15

Yousif, Osama. "Urban Change Detection Using Multitemporal SAR Images." Doctoral thesis, KTH, Geoinformatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168216.

Full text
Abstract:
Multitemporal SAR images have been increasingly used for the detection of different types of environmental changes. The detection of urban changes using SAR images is complicated due to the complex mixture of the urban environment and the special characteristics of SAR images, for example, the existence of speckle. This thesis investigates urban change detection using multitemporal SAR images with the following specific objectives: (1) to investigate unsupervised change detection, (2) to investigate effective methods for reduction of the speckle effect in change detection, (3) to investigate spatio-contextual change detection, (4) to investigate object-based unsupervised change detection, and (5) to investigate a new technique for object-based change image generation. Beijing and Shanghai, the largest cities in China, were selected as study areas. Multitemporal SAR images acquired by ERS-2 SAR and ENVISAT ASAR sensors were used for pixel-based change detection. For the object-based approaches, TerraSAR-X images were used. In Paper I, the unsupervised detection of urban change was investigated using the Kittler-Illingworth algorithm. A modified ratio operator that combines positive and negative changes was used to construct the change image. Four density function models were tested and compared. Among them, the log-normal and Nakagami ratio models achieved the best results. Despite the good performance of the algorithm, the obtained results suffer from the loss of fine geometric detail in general. This was a consequence of the use of local adaptive filters for speckle suppression. Paper II addresses this problem using the nonlocal means (NLM) denoising algorithm for speckle suppression and detail preservation. In this algorithm, denoising was achieved through a moving weighted average. The weights are a function of the similarity of small image patches defined around each pixel in the image. To decrease the computational complexity, principal component analysis (PCA) was used to reduce the dimensionality of the neighbourhood feature vectors. Simple methods to estimate the number of significant PCA components to be retained for weight computation and the required noise variance were proposed. The experimental results showed that the NLM algorithm successfully suppressed speckle effects, while preserving fine geometric detail in the scene. The analysis also indicates that filtering the change image instead of the individual SAR images was effective in terms of the quality of the results and the time needed to carry out the computation. The Markov random field (MRF) change detection algorithm showed limited capacity to simultaneously maintain fine geometric detail in urban areas and combat the effect of speckle. To overcome this problem, Paper III utilizes the NLM theory to define a nonlocal constraint on the pixels' class labels. The iterated conditional mode (ICM) scheme for the optimization of the MRF criterion function is extended to include a new step that maximizes the nonlocal probability model. Compared with the traditional MRF algorithm, the experimental results showed that the proposed algorithm was superior in preserving fine structural detail, effective in reducing the effect of speckle, less sensitive to the value of the contextual parameter, and less affected by the quality of the initial change map. Paper IV investigates object-based unsupervised change detection using very high resolution TerraSAR-X images over urban areas.
Three algorithms, i.e., Kittler-Illingworth, Otsu, and outlier detection, were tested and compared. The multitemporal images were segmented using a multidate segmentation strategy. The analysis reveals that the three algorithms achieved similar accuracies. The achieved accuracies were very close to the maximum possible, given the modified ratio image as an input. This maximum, however, was not very high. This was attributed, partially, to the low capacity of the modified ratio image to accentuate the difference between changed and unchanged areas. Consequently, Paper V proposes a new object-based change image generation technique. The strong intensity variations associated with high resolution and speckle effects render the object mean intensity an unreliable feature. The modified ratio image is, therefore, less efficient in emphasizing the contrast between the classes. An alternative representation of the change data was proposed. To measure the intensity of change at the object level, in isolation from disturbances caused by strong intensity variations and speckle effects, two techniques based on the Fourier transform and the wavelet transform of the change signal were developed. Qualitative and quantitative analyses of the results show that improved change detection accuracies can be obtained by classifying the proposed change variables.


APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0028.

Full text
Abstract:
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist’s primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of a urologist and is still difficult and subjective. To overcome the subjective interpretation and facilitate objective diagnosis, computer aided analysis of ultrasound images of the prostate would be very helpful. Computer aided analysis of images may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis is an attempt to address several key elements of computer aided analysis of ultrasound images of the prostate. Specifically, it addresses the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting prostate contour. Speckle refers to the granular appearance that compromises the image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Due to the existence of speckle the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which not only provides an insight into the nature of speckle, but also a viable test-bed for any ultrasound speckle reduction methods. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed to analyse statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, which is based on Mallat and Zhong’s dyadic wavelet transform (MZDWT) and modelling statistical properties of the wavelet coefficients and exploiting their inter-scale correlation. Specifically, the squared modulus of the component wavelet coefficients are modelled as a two-state Gamma mixture. Interscale correlation is exploited by taking the harmonic mean of the posterior probability functions, which are derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory in that the important features of the original noise corrupted image are preserved while most of the speckle noise is removed successfully. It is also evaluated both qualitatively and quantitatively by comparing it with median, Wiener, and Lee filters, and the results revealed that it surpasses all these filters. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in a multi-scale framework provided by MZDWT. Specifically, the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and thus are “switchable”. Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate
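A greatly simplified stand-in for the wavelet-domain shrinkage described above: plain soft-thresholding with a universal threshold, in place of the two-state Gamma-mixture model and inter-scale correlation used in the thesis (PyWavelets, with generic parameter choices).

```python
# Baseline wavelet soft-threshold denoising of a B-mode ultrasound image.
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet='db2', level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate from finest diagonal band
    t = sigma * np.sqrt(2 * np.log(img.size))                 # universal threshold
    new_coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, t, mode='soft') for c in band)
                                for band in coeffs[1:]]
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[:img.shape[0], :img.shape[1]]                  # crop possible padding
```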
APA, Harvard, Vancouver, ISO, and other styles
17

Ghimpeteanu, Gabriela. "Several approaches to improve noise removal in photographic images." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/461012.

Full text
Abstract:
Noise acquisition is an unavoidable component when capturing photographs, even in the case of current state-of-the-art cameras. This problem is even accentuated when the lighting conditions are not ideal. Therefore, removing the noise present in the captured image is still an essential task in the camera image processing pipeline. In this thesis, we analyze several approaches to improve current image denoising methods. First, we propose a general framework that can improve a denoising method, motivated by a simple principle: for any algorithm, the smaller the noise level, the higher the quality of the denoised image. Therefore, by carefully choosing a decomposition of the noisy image into less noisy one(s) and applying the algorithm on the latter, the performance of any denoising method can increase. Second, we accentuate the importance of using a realistic noise model for testing any denoising method, as in the usual AWG scenario the results can be extremely different. The noise model can be estimated on RAW images, as the camera processing pipeline alters the noise, and denoising becomes a challenge when applied on the camera output. We show how a local method applied on RAW can outperform a non-local one applied on the camera output in the realistic noise scenario. Finally, in this thesis we propose a fast, local denoising method where the Euclidean curvature of the noisy image is approximated in a regularizing manner and a clean image is reconstructed from this smoothed curvature. User preference tests show that when denoising real photographs with actual noise our method produces results with the same visual quality as the more sophisticated, non-local algorithms, but at a fraction of their computational cost. These tests also highlight the limitations of objective image quality metrics like PSNR and SSIM, which correlate poorly with user preference.
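As a side note to the last sentence, these are the scikit-image implementations of the PSNR and SSIM metrics whose limitations the abstract points out; shown only for reference, and data_range=1.0 assumes images scaled to [0, 1].

```python
# PSNR and SSIM between a clean reference and a denoised result.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(clean, denoised):
    return (peak_signal_noise_ratio(clean, denoised, data_range=1.0),
            structural_similarity(clean, denoised, data_range=1.0))
```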
APA, Harvard, Vancouver, ISO, and other styles
18

Moebel, Emmanuel. "New strategies for the identification and enumeration of macromolecules in 3D images of cryo electron tomography." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S007/document.

Full text
Abstract:
Cryo electron tomography (cryo-ET) is an imaging technique capable of producing 3D views of biological specimens. This technology enables the capture of large fields of view of vitrified cells at nanometer resolution. These features allow several scales of understanding of the cellular machinery to be combined, from the interactions between groups of proteins to their atomic structure. Cryo-ET therefore has the potential to act as a link between in vivo cell imaging and atomic resolution techniques. However, cryo-ET images suffer from a high amount of noise and imaging artifacts, and the interpretability of these images heavily depends on computational image analysis methods. Existing methods allow large macromolecules such as ribosomes to be identified, but there is evidence that the detections are incomplete. In addition, these methods are limited when the searched objects are smaller and have more structural variability. The purpose of this thesis is to propose new image analysis methods in order to enable a more robust identification of macromolecules of interest. We propose two computational methods to achieve this goal. The first aims at reducing the noise and imaging artifacts, and operates by iteratively adding and removing artificial noise to the image. We provide both mathematical and experimental evidence that this concept allows signal in cryo-ET images to be enhanced. The second method builds on recent advances in machine learning to improve macromolecule localization. The method is based on a convolutional neural network, and we show how it can be adapted to achieve better detection rates than the current state-of-the-art.
APA, Harvard, Vancouver, ISO, and other styles
19

Müller, Jan-Steffen [Verfasser], and Martin [Akademischer Betreuer] Fuchs. "Regularity aspects of a higher-order variational approach to the denoising and inpainting of images with TV-type energies / Jan-Steffen Müller ; Betreuer: Martin Fuchs." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/1155164784/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Müller, Jan-Steffen Verfasser], and Martin [Akademischer Betreuer] [Fuchs. "Regularity aspects of a higher-order variational approach to the denoising and inpainting of images with TV-type energies / Jan-Steffen Müller ; Betreuer: Martin Fuchs." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/1155164784/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Sutour, Camille. "Vision nocturne numérique : restauration automatique et recalage multimodal des images à bas niveau de lumière." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0099/document.

Full text
Abstract:
La vision de nuit des pilotes d’hélicoptère est artificiellement assistée par un dispositif de vision bas niveau de lumière constitué d’un intensificateur de lumière (IL) couplé à une caméra numérique d’une part, et d’une caméra infrarouge (IR) d’autre part. L’objectif de cette thèse est d’améliorer ce dispositif en ciblant les défauts afin de les corriger. Une première partie consiste à réduire le bruit dont souffrent les images IL. Cela nécessite d’évaluer la nature du bruit qui corrompt ces images. Pour cela, une méthode d’estimation automatique du bruit est mise en place. L’estimation repose sur la détection non paramétrique de zones homogènes de l’image. Les statistiques du bruit peuvent alors être estimées à partir de ces régions homogènes à l’aide d’une méthode d’estimation robuste de la fonction de niveau de bruit par minimisation l1. Grâce à l’estimation du bruit, les images IL peuvent alors être débruitées. Nous avons pour cela développé dans la seconde partie un algorithme de débruitage d’images qui associe les moyennes non locales aux méthodes variationnelles en effectuant une régularisation adaptative pondérée par une attache aux données non locale. Une adaptation au débruitage de séquences d’images permet ensuite de tenir compte de la redondance d’information apportée par le flux vidéo, en garantissant stabilité temporelle et préservation des structures fines. Enfin, dans la troisième partie les informations issues des capteurs optique et infrarouge sont recalées dans un même référentiel. Nous proposons pour cela un critère de recalage multimodal basé sur l’alignement des contours des images. Combiné à une résolution par montée de gradient et à un schéma temporel, l’approche proposée permet de recaler de façon robuste les deux modalités, en vue d’une ultérieure fusion.
Night vision for helicopter pilots is artificially enhanced by a night vision system. It consists of a light intensifier (LI) coupled with a digital camera, and an infrared camera. The goal of this thesis is to improve this device by analyzing its defects in order to correct them. The first part consists in reducing the noise level on the LI images. This requires evaluating the nature of the noise corrupting these images, so an automatic noise estimation method has been developed. The estimation is based on a non-parametric detection of homogeneous areas. The noise statistics are then estimated from these homogeneous regions by performing a robust l1 estimation of the noise level function. The LI images can then be denoised using the noise estimation. We have developed in the second part a denoising algorithm that combines the non-local means with variational methods by applying an adaptive regularization weighted by a non-local data fidelity term. This algorithm is then adapted to video denoising using the redundancy provided by the sequences, hence guaranteeing temporal stability and preservation of the fine structures. Finally, in the third part, data from the optical and infrared sensors are registered in a common reference frame. We propose an edge-based multimodal registration metric. Combined with a gradient ascent resolution and a temporal scheme, the proposed method allows robust registration of the two modalities for later fusion.
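The noise-level-function estimation step described above can be illustrated with a minimal sketch: given (mean, standard deviation) measurements taken from assumed homogeneous regions, fit an affine noise level function by minimizing an l1 (least absolute deviation) loss. The affine model, the synthetic measurements and the optimizer choice are assumptions of the sketch, not the thesis algorithm.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
means = rng.uniform(0.1, 1.0, 200)                      # mean intensity of homogeneous regions
stds = 0.08 * means + 0.02 + rng.normal(0, 0.01, 200)   # measured local noise std
stds[::20] += 0.3                                        # a few outliers (mis-detected regions)

def l1_loss(params):
    # least-absolute-deviation fit of the noise level function sigma(u) = a*u + b
    a, b = params
    return np.abs(stds - (a * means + b)).sum()

res = minimize(l1_loss, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"estimated NLF: sigma(u) ~= {a_hat:.3f}*u + {b_hat:.3f}")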
APA, Harvard, Vancouver, ISO, and other styles
22

Malek, Mohamed. "Extension de l'analyse multi-résolution aux images couleurs par transformées sur graphes." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2304/document.

Full text
Abstract:
Dans ce manuscrit, nous avons étudié l’extension de l’analyse multi-résolution aux images couleurs par des transformées sur graphe. Dans ce cadre, nous avons déployé trois stratégies d’analyse différentes. En premier lieu, nous avons défini une transformée basée sur l’utilisation d’un graphe perceptuel dans l’analyse à travers la transformée en ondelettes spectrale sur graphe. L’application en débruitage d’image met en évidence l’utilisation du SVH dans l’analyse des images couleurs. La deuxième stratégie consiste à proposer une nouvelle méthode d’inpainting pour des images couleurs. Pour cela, nous avons proposé un schéma de régularisation à travers les coefficients d’ondelettes de la TOSG, l’estimation de la structure manquante se fait par la construction d’un graphe des patchs couleurs à partir des moyennes non locales. Les résultats obtenus sont très encourageants et mettent en évidence l’importance de la prise en compte du SVH. Dans la troisième stratégie, nous proposons une nouvelle approche de décomposition d’un signal défini sur un graphe complet. Cette méthode est basée sur l’utilisation des propriétés de la matrice laplacienne associée au graphe complet. Dans le contexte des images couleurs, la prise en compte de la dimension couleur est indispensable pour pouvoir identifier les singularités liées à l’image. Cette dernière offre de nouvelles perspectives pour une étude approfondie de son comportement.
In our work, we studied the extension of multi-resolution analysis to color images by using transforms on graphs. In this context, we deployed three different strategies of analysis. Our first approach consists of computing the graph of an image using psychovisual information and analyzing it with the spectral graph wavelet transform (SGWT). We thus define a wavelet transform based on a graph with perceptual information by using the CIELab color distance. Results in image restoration highlight the interest of an appropriate use of color information. In the second strategy, we propose a novel recovery algorithm for image inpainting represented in the graph domain. Motivated by the efficiency of wavelet regularization schemes and the success of non-local means methods, we construct an algorithm based on the recovery of information in the graph wavelet domain. At each step the damaged structures are estimated by computing the non-local graph, then we apply the graph wavelet regularization model using the SGWT coefficients. The results are very encouraging and highlight the use of perceptual information. In the last strategy, we propose a new approach to the decomposition of signals defined on complete graphs. This method is based on the exploitation of the properties of the Laplacian matrix of the complete graph. In the context of image processing, the use of the color distance is essential to identify the specificities of the color image. This approach opens new perspectives for an in-depth study of its behavior.
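A minimal sketch of the basic ingredient used above, a pixel graph weighted by CIELab color distances and its combinatorial Laplacian (on which a spectral graph wavelet transform would then operate), is given below. The 4-connectivity, the Gaussian weight kernel and the downsampled test image are illustrative assumptions, not the construction of the thesis.

import numpy as np
from skimage import data, color
from scipy.sparse import lil_matrix, diags

img = color.rgb2lab(data.astronaut()[::8, ::8] / 255.0)   # small CIELab image
h, w, _ = img.shape
n = h * w
W = lil_matrix((n, n))
sigma = 10.0

def idx(y, x):
    return y * w + x

for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):                    # right and down neighbours
            yy, xx = y + dy, x + dx
            if yy < h and xx < w:
                d = np.linalg.norm(img[y, x] - img[yy, xx])  # CIELab color distance
                wgt = np.exp(-(d ** 2) / (2 * sigma ** 2))
                W[idx(y, x), idx(yy, xx)] = wgt
                W[idx(yy, xx), idx(y, x)] = wgt

W = W.tocsr()
L = diags(np.asarray(W.sum(axis=1)).ravel()) - W            # combinatorial graph Laplacian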
APA, Harvard, Vancouver, ISO, and other styles
23

Tran, Dai viet. "Patch-based Bayesian approaches for image restoration." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD049.

Full text
Abstract:
Les travaux présentés dans cette thèse concernent les approches bayésiennes par patchs des problèmes d’amélioration de la qualité d’images. Notre contribution réside en le choix du dictionnaire construit grâce à un ensemble d’images de haute qualité et en la définition et l’utilisation d’un modèle à priori pour la distribution des patchs dans l’espace du dictionnaire. Nous avons montré qu’un choix attentif du dictionnaire représentant les informations locales des images permettait une amélioration de la qualité des images dégradées. Plus précisément, d’un dictionnaire construit de façon exhaustive sur les images de haute qualité nous avons sélectionné, pour chaque patch de l’image dégradée, un sous dictionnaire fait de ses voisins les plus proches. La similarité entre les patchs a été mesurée grâce à l’utilisation de la distance du cantonnier (Earth Mover’s Distance) entre les distributions des intensités de ces patchs. L’algorithme de super résolution présenté a conduit à de meilleurs résultats que les algorithmes les plus connus. Pour les problèmes de débruitage d’images nous nous sommes intéressés à la distribution à priori des patchs dans l’espace du dictionnaire afin de l’utiliser comme pré requis pour régulariser le problème d’optimisation donné par le Maximum à Posteriori. Dans le cas d’un dictionnaire de petite dimension, nous avons proposé une distribution constante par morceaux. Pour les dictionnaires de grande dimension, la distribution à priori a été recherchée comme un mélange de gaussiennes (GMM). Nous avons finalement justifié le nombre de gaussiennes utiles pour une bonne reconstruction apportant ainsi un nouvel éclairage sur l’utilisation des GMM
In this thesis, we investigate patch-based image denoising and super-resolution under the Bayesian Maximum A Posteriori framework, with the help of a set of high-quality images which are known as standard images. Our contributions address the construction of the dictionary, which is used to represent image patches, and the prior distribution in dictionary space. We have demonstrated that the careful selection of a dictionary to represent the local information of an image can improve image reconstruction. By establishing an exhaustive dictionary from the standard images, our main contribution is to locally select a sub-dictionary of matched patches to recover each patch in the degraded image. Besides the conventional Euclidean measure, we propose an effective similarity metric based on the Earth Mover's Distance (EMD) for patch selection, by considering each patch as a distribution of image intensities. Our EMD-based super-resolution algorithm has outperformed several state-of-the-art super-resolution methods. To enhance the quality of image denoising, we exploit the distribution of patches in the dictionary space as an image prior to regularize the optimization problem. We develop a computationally efficient procedure, based on piecewise-constant function estimation, for low-dimension dictionaries, and then propose a Gaussian Mixture Model (GMM) for higher-complexity dictionary spaces. Finally, we justify the practical number of Gaussian components required for recovering patches. Our experiments on multiple datasets with combinations of different dictionaries and GMM models help fill the lack of evidence on the use of GMMs in the literature.
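The EMD-based patch selection described above can be sketched with SciPy's one-dimensional Wasserstein distance: for a query patch, rank the dictionary patches by the distance between their intensity distributions and keep the k closest as a local sub-dictionary. The patch size, the random dictionary and k are illustrative assumptions.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
patch_size = 8
dictionary = rng.random((500, patch_size * patch_size))   # stand-in for patches from standard images
query = rng.random(patch_size * patch_size)               # a patch from the degraded image

def emd_select(query, dictionary, k=20):
    # Earth Mover's Distance between the intensity distributions of the patches
    d = np.array([wasserstein_distance(query, atom) for atom in dictionary])
    return np.argsort(d)[:k]                               # indices of the k nearest patches

sub_dictionary = dictionary[emd_select(query, dictionary)]
print(sub_dictionary.shape)   # (20, 64)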
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Jiachao. "Image denoising for real image sensors." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1437954286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Irrera, Paolo. "Traitement d'images de radiographie à faible dose : Débruitage et rehaussement de contraste conjoints et détection automatique de points de repère anatomiques pour l'estimation de la qualité des images." Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0031/document.

Full text
Abstract:
Nos travaux portent sur la réduction de la dose de rayonnement lors d'examens réalisés avec le Système de radiologie EOS. Deux approches complémentaires sont étudiées. Dans un premier temps, nous proposons une méthode de débruitage et de rehaussement de contraste conjoints pour optimiser le compromis entre la qualité des images et la dose de rayons X. Nous étendons le filtre à moyennes non locales pour restaurer les images EOS. Nous étudions ensuite comment combiner ce filtre à une méthode de rehaussement de contraste multi-échelles. La qualité des images cliniques est optimisée grâce à des fonctions limitant l'augmentation du bruit selon la quantité d’information locale redondante captée par le filtre. Dans un deuxième temps, nous estimons des indices d’exposition (EI) sur les images EOS afin de donner aux utilisateurs un retour immédiat sur la qualité de l'image acquise. Nous proposons ainsi une méthode reposant sur la détection de points de repère qui, grâce à l'exploitation de la redondance de mesures locales, est plus robuste à la présence de données aberrantes que les méthodes existantes. En conclusion, la méthode de débruitage et de rehaussement de contraste conjoints donne des meilleurs résultats que ceux obtenus par un algorithme exploité en routine clinique. La qualité des images EOS peut être quantifiée de manière robuste par des indices calculés automatiquement. Étant donnée la cohérence des mesures sur des images de pré-affichage, ces indices pourraient être utilisés en entrée d'un système de gestion automatique des expositions
We aim at reducing the ALARA (As Low As Reasonably Achievable) dose limits for images acquired with EOS full-body system by means of image processing techniques. Two complementary approaches are studied. First, we define a post-processing method that optimizes the trade-off between acquired image quality and X-ray dose. The Non-Local means filter is extended to restore EOS images. We then study how to combine it with a multi-scale contrast enhancement technique. The image quality for the diagnosis is optimized by defining non-parametric noise containment maps that limit the increase of noise depending on the amount of local redundant information captured by the filter. Secondly, we estimate exposure index (EI) values on EOS images which give an immediate feedback on image quality to help radiographers to verify the correct exposure level of the X-ray examination. We propose a landmark detection based approach that is more robust to potential outliers than existing methods as it exploits the redundancy of local estimates. Finally, the proposed joint denoising and contrast enhancement technique significantly increases the image quality with respect to an algorithm used in clinical routine. Robust image quality indicators can be automatically associated with clinical EOS images. Given the consistency of the measures assessed on preview images, these indices could be used to drive an exposure management system in charge of defining the optimal radiation exposure
APA, Harvard, Vancouver, ISO, and other styles
26

Casaca, Wallace Correa de Oliveira [UNESP]. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/94247.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Neste trabalho propomos quatro novas abordagens para tratar o problema de restauração de imagens reais contendo texturas sob a perspectiva dos temas: reconstrução de regiões danificadas, remoção de objetos, e eliminação de ruídos. As duas primeiras abordagens são designadas para recompor partes perdidas ou remover objetos de uma imagem real a partir de formulações envolvendo decomposição de imagens e inpainting por exemplar, enquanto que as duas últimas são empregadas para remover ruído, cujas formulações são baseadas em decomposição de três termos e equações diferenciais parciais não lineares. Resultados experimentais atestam a boa performance dos protótipos apresentados quando comparados a modelagens correlatas da literatura.
In this work we propose four new approaches to address the problem of restoring real images containing textures, from the perspective of reconstruction of damaged areas, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
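The decomposition step used as a building block in these approaches can be illustrated with a tiny cartoon-plus-texture split, where a TV solver provides the piecewise-smooth part and the residual carries texture and noise. Using scikit-image's Chambolle TV solver and this particular split is an assumption of the sketch, not the formulation of the thesis.

import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

f = img_as_float(data.camera())
u = denoise_tv_chambolle(f, weight=0.15)   # cartoon: piecewise-smooth component
v = f - u                                  # texture + noise: oscillatory residual
# The approaches above would then inpaint or denoise u and v with separate models.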
APA, Harvard, Vancouver, ISO, and other styles
27

Ghazel, Mohsen. "Adaptive Fractal and Wavelet Image Denoising." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/882.

Full text
Abstract:
The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor quality image acquisition, images observed in a noisy environment or noise inherent in communication channels. In this thesis, image denoising is investigated. After reviewing standard image denoising methods as applied in the spatial, frequency and wavelet domains of the noisy image, the thesis embarks on the endeavor of developing and experimenting with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresholding operators which take into consideration the content of the immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes are based on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one. From this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and the wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, the cycle spinning idea was implemented in order to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed image denoising methods are competitive with, or sometimes even compare favorably with, the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have been used mainly for image coding and compression purposes.
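A neighbourhood-aware ("context-based") soft thresholding of wavelet detail coefficients can be sketched with PyWavelets as follows; the specific local-energy rule used to scale the threshold is an illustrative choice, not the strategy proposed in the thesis.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter
from skimage import data, img_as_float

rng = np.random.default_rng(0)
sigma = 0.1
noisy = img_as_float(data.camera()) + rng.normal(0, sigma, (512, 512))

coeffs = pywt.wavedec2(noisy, "db4", level=3)
base_t = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold as a baseline
new_coeffs = [coeffs[0]]
for detail_level in coeffs[1:]:
    bands = []
    for band in detail_level:
        local_energy = uniform_filter(band ** 2, size=3)     # 3x3 neighbourhood context
        t = base_t / (1.0 + local_energy / (sigma ** 2))     # weaker threshold in busy areas
        shrunk = np.sign(band) * np.maximum(np.abs(band) - t, 0.0)   # soft thresholding
        bands.append(shrunk)
    new_coeffs.append(tuple(bands))

denoised = pywt.waverec2(new_coeffs, "db4")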
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Zhi. "Variational image segmentation, inpainting and denoising." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/292.

Full text
Abstract:
Variational methods have attracted much attention in the past decade. With rigorous mathematical analysis and computational methods, variational minimization models can handle many practical problems arising in image processing, such as image segmentation and image restoration. We propose a two-stage image segmentation approach for color images: in the first stage, the primal-dual algorithm is applied to efficiently solve the proposed minimization problem for a smoothed image solution without irrelevant and trivial information; in the second stage, we adopt a hill-climbing procedure to segment the smoothed image. For multiplicative noise removal, we employ a difference-of-convex algorithm to solve the non-convex AA model. We also improve the non-local total variation model. More precisely, we add an extra term to impose regularity on the graph formed by the weights between pixels. Thin structures can benefit from this regularization term, because it allows the weight values to be adapted from a global point of view, so thin features will not be overlooked as in conventional non-local models. Since the non-local total variation term now has two variables, the image u and the weights v, and is concave with respect to v, the proximal alternating linearized minimization algorithm is naturally applied with variable metrics to solve the non-convex model efficiently. The efficiency of the proposed approaches is demonstrated on problems including image segmentation, image inpainting and image denoising.
APA, Harvard, Vancouver, ISO, and other styles
29

Danda, Swetha. "Generalized diffusion model for image denoising." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5481.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2007.
Title from document title page. Document formatted into pages; contains viii, 62 p. : ill. Includes abstract. Includes bibliographical references (p. 59-62).
APA, Harvard, Vancouver, ISO, and other styles
30

Niu, Pei. "Multi-energy image reconstruction in spectral photon-counting CT." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI022.

Full text
Abstract:
Le scanner CT spectral à comptage de photons (sCT) est apparu récemment comme une nouvelle technique d'imagerie présentant des avantages fondamentaux par rapport au scanner CT classique et au scanner CT à double énergie. Cependant, en raison du nombre réduit de photons dans chaque bande d'énergie du scanner sCT et des artéfacts divers, la reconstruction des images devient particulièrement difficile. Cette thèse se concentre sur la reconstruction d'images multi-énergie en sCT. Tout d'abord, nous proposons d'étudier la capacité du scanner sCT à réaliser simultanément une imagerie anatomique (aCT) et fonctionnelle (fCT) en une seule acquisition par reconstruction et décomposition des matériaux. La fonction aCT du scanner sCT est étudiée dans la même configuration que celle du scanner CT classique, et la fonction fCT du scanner sCT est étudiée en appliquant des algorithmes de décomposition de matériaux aux mêmes données multi-énergie. Ensuite, comme le bruit est un problème particulièrement aigu en raison du nombre largement réduit de photons dans chaque bande d'énergie du scanner sCT, nous introduisons un mécanisme de débruitage dans la reconstruction de l'image pour effectuer simultanément un débruitage et une reconstruction. Enfin, pour améliorer la reconstruction de l'image, nous proposons de reconstruire l'image à une bande d'énergie donnée en exploitant les informations dans toutes les autres bandes d'énergie. La stratégie clé de cette approche consiste à regrouper les pixels similaires issus de la reconstruction de toutes les bandes d'énergie en une seule classe, à les ajuster dans la même classe, à projeter les résultats de l'ajustement dans chaque bande d'énergie, et à débruiter les informations projetées. Elle est utilisée à la fois comme une opération post-débruitage pour démontrer son efficacité et comme un terme de régularisation ou un terme de régularisations combinées pour la réalisation simultanée du débruitage et de la reconstruction. Toutes les méthodes ci-dessus sont évaluées sur des données de simulation et des données réelles provenant d'un scanner sCT préclinique
Spectral photon-counting CT (sCT) appeared recently as a new imaging technique presenting fundamental advantages with respect to conventional CT and dual-energy CT. However, due to the reduced number of photons in each energy bin of sCT and various artifacts, image reconstruction becomes particularly difficult. This thesis focuses on the reconstruction of multi-energy images in sCT. First, we propose to consider the ability of sCT to achieve both anatomical (aCT) and functional (fCT) imaging simultaneously in a single acquisition through reconstruction and material decomposition. The aCT function of sCT is studied under the same configuration as that of conventional CT, and the fCT function of sCT is investigated by applying material decomposition algorithms to the same acquired multi-energy data. Then, since noise is a particularly acute problem due to the largely reduced number of photons in each energy bin of sCT, we introduce a denoising mechanism into the image reconstruction to perform simultaneous reconstruction and denoising. Finally, to improve image reconstruction, we propose to reconstruct the image at a given energy bin by exploiting information in all other energy bins. The key strategy in this approach consists of grouping the similar pixels from the reconstruction of all the energy bins into the same class, fitting within each class, mapping the fitting results into each energy bin, and denoising with the mapped information. It is used both as a post-denoising operation to demonstrate its effectiveness and as a regularization term or a combined regularization term for simultaneous reconstruction and denoising. All the above methods are evaluated on both simulation and real data from a pre-clinical sCT system.
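A very crude illustration of the cross-energy-bin grouping idea reads as follows: cluster pixels by their multi-energy vectors, then use the per-class mean in each bin as the fitted value mapped back to every bin. The k-means clustering, the per-class mean as the "fit" and the synthetic four-bin data are stand-ins for the sketch, not the thesis algorithm.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w, n_bins = 64, 64, 4
labels_true = (np.arange(h)[:, None] // 16 + np.arange(w)[None, :] // 16) % 3
spectra = np.array([[1.0, 0.8, 0.5, 0.3], [0.4, 0.6, 0.7, 0.9], [0.2, 0.2, 0.3, 0.2]])
bins = spectra[labels_true] + rng.normal(0, 0.15, (h, w, n_bins))   # noisy multi-energy image

X = bins.reshape(-1, n_bins)
classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
fitted = np.zeros_like(X)
for c in range(3):
    fitted[classes == c] = X[classes == c].mean(axis=0)   # fit within each class (here: the mean)
fitted = fitted.reshape(h, w, n_bins)                     # mapped back into every energy bin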
APA, Harvard, Vancouver, ISO, and other styles
31

Casaca, Wallace Correa de Oliveira. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais /." São José do Rio Preto : [s.n.], 2010. http://hdl.handle.net/11449/94247.

Full text
Abstract:
Orientador: Maurílio Boaventura
Banca: Evanildo Castro Silva Júnior
Banca: Alagacone Sri Ranga
Resumo: Neste trabalho propomos quatro novas abordagens para tratar o problema de restauração de imagens reais contendo texturas sob a perspectiva dos temas: reconstrução de regiões danificadas, remoção de objetos, e eliminação de ruídos. As duas primeiras abordagens são designadas para recompor partes perdidas ou remover objetos de uma imagem real a partir de formulações envolvendo decomposição de imagens e inpainting por exemplar, enquanto que as duas últimas são empregadas para remover ruído, cujas formulações são baseadas em decomposição de três termos e equações diferenciais parciais não lineares. Resultados experimentais atestam a boa performance dos protótipos apresentados quando comparados a modelagens correlatas da literatura.
Abstract: In this work we propose four new approaches to address the problem of restoring real images containing textures, from the perspective of reconstruction of damaged areas, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on three-term decomposition and nonlinear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
Mestre
APA, Harvard, Vancouver, ISO, and other styles
32

Deng, Hao. "Mathematical approaches to digital color image denoising." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.

Full text
Abstract:
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2010.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
33

Hussain, Israr. "Non-gaussianity based image deblurring and denoising." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sarjanoja, S. (Sampsa). "BM3D image denoising using heterogeneous computing platforms." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201504141380.

Full text
Abstract:
Noise reduction is one of the most fundamental digital image processing problems, and is often designed to be solved at an early stage of the image processing path. Noise appears in images in many different ways, and it is inevitable. In general, various image processing algorithms perform better if their input is as error-free as possible. In order to keep the processing delays small on different computing platforms, it is important that the noise reduction is performed swiftly. The recent progress in the entertainment industry has led to major improvements in the computing capabilities of graphics cards. Today, graphics circuits consist of several hundreds or even thousands of computing units. Using these computing units for general-purpose computation is possible with the OpenCL and CUDA programming interfaces. In applications where the processed data is relatively independent, using parallel computing units may increase the performance significantly. Graphics chips enabled with general-purpose computation capabilities are becoming more common also in mobile devices. In addition, photography has never been as popular as it is nowadays on mobile devices. This thesis aims to implement the state-of-the-art noise reduction technique, block-matching and three-dimensional filtering (BM3D), for execution in heterogeneous computing environments. This study evaluates the performance of the presented implementations by making comparisons with existing implementations. The presented implementations achieve significant benefits from the use of parallel computing devices. At the same time, the comparisons illustrate general problems in using massively parallel processing for the computation of complex imaging algorithms.
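The block-matching step of BM3D, grouping the patches that are later filtered collaboratively, is the part that parallelizes most naturally. A minimal NumPy sketch of that step is given below; the block and search-window sizes and the SSD similarity are standard illustrative choices, and this is plain CPU code, not the OpenCL/CUDA implementation of the thesis.

import numpy as np
from skimage import data, img_as_float

img = img_as_float(data.camera())

def block_match(img, ref_y, ref_x, block=8, search=16, n_matches=16):
    """Collect the n_matches blocks most similar (by SSD) to the reference block
    inside a local search window, stacked as a 3-D group for collaborative filtering."""
    ref = img[ref_y:ref_y + block, ref_x:ref_x + block]
    candidates = []
    for y in range(max(0, ref_y - search), min(img.shape[0] - block, ref_y + search) + 1):
        for x in range(max(0, ref_x - search), min(img.shape[1] - block, ref_x + search) + 1):
            cand = img[y:y + block, x:x + block]
            candidates.append((np.sum((cand - ref) ** 2), y, x))
    candidates.sort(key=lambda t: t[0])
    return np.stack([img[y:y + block, x:x + block] for _, y, x in candidates[:n_matches]])

group = block_match(img, 120, 200)
print(group.shape)   # (16, 8, 8)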
Kohinanpoisto on yksi keskeisimmistä digitaaliseen kuvankäsittelyyn liittyvistä ongelmista, joka useimmiten pyritään ratkaisemaan jo signaalinkäsittelyvuon varhaisessa vaiheessa. Kohinaa ilmestyy kuviin monella eri tavalla ja sen esiintyminen on väistämätöntä. Useat kuvankäsittelyalgoritmit toimivat paremmin, jos niiden syöte on valmiiksi mahdollisimman virheetöntä käsiteltäväksi. Jotta kuvankäsittelyviiveet pysyisivät pieninä eri laskenta-alustoilla, on tärkeää että myös kohinanpoisto suoritetaan nopeasti. Viihdeteollisuuden kehityksen myötä näytönohjaimien laskentateho on moninkertaistunut. Nykyisin näytönohjainpiirit koostuvat useista sadoista tai jopa tuhansista laskentayksiköistä. Näiden laskentayksiköiden käyttäminen yleiskäyttöiseen laskentaan on mahdollista OpenCL- ja CUDA-ohjelmointirajapinnoilla. Rinnakkaislaskenta usealla laskentayksiköllä mahdollistaa suuria suorituskyvyn parannuksia käyttökohteissa, joissa käsiteltävä tieto on toisistaan riippumatonta tai löyhästi riippuvaista. Näytönohjainpiirien käyttö yleisessä laskennassa on yleistymässä myös mobiililaitteissa. Lisäksi valokuvaaminen on nykypäivänä suosituinta juuri mobiililaitteilla. Tämä diplomityö pyrkii selvittämään viimeisimmän kohinanpoistoon käytettävän tekniikan, lohkonsovitus ja kolmiulotteinen suodatus (block-matching and three-dimensional filtering, BM3D), laskennan toteuttamista heterogeenisissä laskentaympäristöissä. Työssä arvioidaan esiteltyjen toteutusten suorituskykyä tekemällä vertailuja jo olemassa oleviin toteutuksiin. Esitellyt toteutukset saavuttavat merkittäviä hyötyjä rinnakkaislaskennan käyttämisestä. Samalla vertailuissa havainnollistetaan yleisiä ongelmakohtia näytönohjainlaskennan hyödyntämisessä monimutkaisten kuvankäsittelyalgoritmien laskentaan
APA, Harvard, Vancouver, ISO, and other styles
35

Houdard, Antoine. "Some advances in patch-based image denoising." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT005/document.

Full text
Abstract:
Cette thèse s'inscrit dans le contexte des méthodes non locales pour le traitement d'images et a pour application principale le débruitage, bien que les méthodes étudiées soient suffisamment génériques pour être applicables à d'autres problèmes inverses en imagerie. Les images naturelles sont constituées de structures redondantes, et cette redondance peut être exploitée à des fins de restauration. Une manière classique d’exploiter cette auto-similarité est de découper l'image en patchs. Ces derniers peuvent ensuite être regroupés, comparés et filtrés ensemble.Dans le premier chapitre, le principe du "global denoising" est reformulé avec le formalisme classique de l'estimation diagonale et son comportement asymptotique est étudié dans le cas oracle. Des conditions précises à la fois sur l'image et sur le filtre global sont introduites pour assurer et quantifier la convergence.Le deuxième chapitre est consacré à l'étude d’a priori gaussiens ou de type mélange de gaussiennes pour le débruitage d'images par patches. Ces a priori sont largement utilisés pour la restauration d'image. Nous proposons ici quelques indices pour répondre aux questions suivantes : Pourquoi ces a priori sont-ils si largement utilisés ? Quelles informations encodent-ils ?Le troisième chapitre propose un modèle probabiliste de mélange pour les patchs bruités, adapté à la grande dimension. Il en résulte un algorithme de débruitage qui atteint les performance de l'état-de-l'art.Le dernier chapitre explore des pistes d'agrégation différentes et propose une écriture de l’étape d'agrégation sous la forme d'un problème de moindre carrés
This thesis studies non-local methods for image processing, and their application to various tasks such as denoising. Natural images contain redundant structures, and this property can be used for restoration purposes. A common way to consider this self-similarity is to separate the image into "patches". These patches can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: Why are Gaussian priors so widely used? What information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model on the noisy patches. This model adopts a sparse modeling which assumes that the data lie on group-specific subspaces of low dimensionalities. This yields a denoising algorithm that demonstrates state-of-the-art performance. The last chapter explores different ways of aggregating the patches together. A framework that expresses the patch aggregation in the form of a least squares problem is proposed.
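For context, the generic MAP/Wiener step used with Gaussian-mixture patch priors can be sketched as follows: learn a GMM on clean patches, pick the most responsible component for a noisy patch, and apply the corresponding linear filter x_hat = mu + Sigma (Sigma + sigma^2 I)^{-1} (y - mu). The toy training data and sizes are assumptions, and this is the textbook construction rather than the specific high-dimensional model of the thesis.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d, sigma = 64, 0.1                                    # 8x8 patches, noise std
clean_patches = rng.random((2000, d))                 # stand-in for patches from clean images
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(clean_patches)

def denoise_patch(y):
    # most responsible component (a full treatment would score y under Sigma + sigma^2 I)
    k = gmm.predict(y[None, :])[0]
    mu, Sigma = gmm.means_[k], gmm.covariances_[k]
    W = Sigma @ np.linalg.inv(Sigma + sigma ** 2 * np.eye(d))   # Wiener-type MAP filter
    return mu + W @ (y - mu)

noisy_patch = clean_patches[0] + rng.normal(0, sigma, d)
x_hat = denoise_patch(noisy_patch)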
APA, Harvard, Vancouver, ISO, and other styles
36

De, Santis Simone. "Quantum Median Filter for Total Variation denoising." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
In this work we present the Quantum Median Filter, an image processing algorithm for applying Total Variation denoising to quantum image representations. After a brief introduction to the TV model and quantum computing, we present the QMF algorithm and discuss its design and efficiency; then we implement and simulate the quantum circuit using the Qiskit library; finally we apply it to a set of noisy images in order to compare and evaluate experimental results.
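As a point of reference, the classical TV denoising baseline that such results are usually compared against is a one-liner with scikit-image; this is the classical counterpart, not the quantum median filter circuit of the thesis, and the weight and noise level are arbitrary.

import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)
tv = denoise_tv_chambolle(noisy, weight=0.1)          # classical TV (Chambolle) denoising
print(peak_signal_noise_ratio(clean, noisy, data_range=1.0),
      peak_signal_noise_ratio(clean, tv, data_range=1.0))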
APA, Harvard, Vancouver, ISO, and other styles
37

Tuncer, Guney. "A Java Toolbox For Wavelet Based Image Denoising." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12608037/index.pdf.

Full text
Abstract:
Wavelet methods for image denoising have become widespread over the last decade. The effectiveness of this denoising scheme is influenced by many factors, chiefly the choice of wavelet, the threshold determination, and the transform level selected for thresholding. For threshold calculation, one of the classical solutions is the Wiener filter as a linear estimator; another is VisuShrink, which uses global thresholding in the nonlinear setting. The purpose of this work is to develop a Java toolbox used to find the best denoising schemes for distinct image types, particularly Synthetic Aperture Radar (SAR) images. This is accomplished by comparing these basic methods with well-known data-adaptive thresholding methods such as SureShrink, BayesShrink, Generalized Cross Validation and Hypothesis Testing. Some non-wavelet denoising processes are also introduced. Along with simple mean and median filters, the more statistically adaptive median, Lee, Kuan and Frost filtering techniques are also tested to assist the wavelet-based denoising scheme. All of these wavelet-based methods, together with some traditional methods, are implemented in pure Java code using the plug-in concept of ImageJ, a popular image processing tool written in Java.
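The VisuShrink computation named above (MAD noise estimate plus the universal threshold) can be illustrated compactly; the toolbox itself is Java/ImageJ, so the Python sketch below, with a db4 wavelet and three decomposition levels, is only an illustration of the underlying calculation.

import numpy as np
import pywt
from skimage import data, img_as_float

rng = np.random.default_rng(0)
noisy = img_as_float(data.camera()) + rng.normal(0, 0.1, (512, 512))

coeffs = pywt.wavedec2(noisy, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # MAD noise estimate from the finest HH band
t = sigma * np.sqrt(2 * np.log(noisy.size))             # universal (VisuShrink) threshold
den = [coeffs[0]] + [tuple(pywt.threshold(b, t, mode="soft") for b in lvl) for lvl in coeffs[1:]]
denoised = pywt.waverec2(den, "db4")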
APA, Harvard, Vancouver, ISO, and other styles
38

Michael, Simon. "A Comparison of Data Transformations in Image Denoising." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375715.

Full text
Abstract:
The study of signal processing has wide applications, such as in hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising. The methods are each based on a data transformation: specifically, the Fourier Transform and the Singular Value Decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable for the respective methods, and their computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
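Minimal sketches of the two transforms being compared, low-pass masking in the Fourier domain and rank truncation via the SVD, are given below; the cutoff radius, the rank and the noise level are arbitrary illustrative choices, not the settings used in the thesis.

import numpy as np
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = clean + rng.normal(0, 0.1, clean.shape)

# Fourier: keep only frequencies inside a circular low-pass mask.
F = np.fft.fftshift(np.fft.fft2(noisy))
yy, xx = np.indices(noisy.shape)
cy, cx = np.array(noisy.shape) // 2
mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= 60 ** 2
fft_denoised = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# SVD: keep only the leading singular components of the image matrix.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 50
svd_denoised = (U[:, :k] * s[:k]) @ Vt[:k]

print(peak_signal_noise_ratio(clean, fft_denoised, data_range=1.0),
      peak_signal_noise_ratio(clean, svd_denoised, data_range=1.0))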
APA, Harvard, Vancouver, ISO, and other styles
39

Aparnnaa. "Image Denoising and Noise Estimation by Wavelet Transformation." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1555929391906805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lind, Johan. "Evaluating CNN-based models for unsupervised image denoising." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176092.

Full text
Abstract:
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms would be affected by different network architectures. The testing used two different datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, resulted in high performance competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, on the other hand generally performed poorly.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Xiaoyang. "Advanced numerical methods for image denoising and segmentation." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11954/.

Full text
Abstract:
Image denoising is one of the major steps in current image processing. It is a pre-processing step which aims to remove certain unknown, random noise from an image and obtain a noise-free image for further processing, such as image segmentation. Image segmentation, as another branch of image processing, plays a significant role in connecting low-level and high-level image processing. Its goal is to segment an image into different parts and extract meaningful information for image analysis and understanding. In recent years, methods based on PDEs and variational functionals have become very popular in both image denoising and image segmentation. These two branches of methods are presented and investigated in this thesis. Several typical PDE-based methods are reviewed and examined: the isotropic diffusion model, the anisotropic diffusion model (the P-M model), the fourth-order PDE model (the Y-K model), and the active contour model for image segmentation. Based on the analysis of the behaviour of each model, some improvements are proposed. First, a new coefficient is provided for the P-M model to obtain a well-posed model and reduce the "block effect". Second, a weighted sum operator is used to replace the Laplacian operator in the Y-K model; this replacement reduces the speckles introduced by the Y-K model and preserves more details. Third, an adaptive relaxation method with a discontinuity treatment is proposed to improve the numerical solution of the Y-K model. Fourth, an active contour model coupled with the anisotropic diffusion model is proposed to build a noise-resistant segmentation method. Finally, three ways of deriving PDEs are developed and summarised. The issue of PSNR is also discussed at the end of the thesis.
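The classic Perona-Malik (P-M) iteration with the standard exponential conductance g(s) = exp(-(s/K)^2) can be sketched as below; the improved, well-posed coefficient proposed in the thesis is not reproduced here, and periodic boundaries are used purely for brevity.

import numpy as np
from skimage import data, img_as_float

def perona_malik(img, n_iter=20, K=0.1, dt=0.2):
    """Classic P-M anisotropic diffusion (explicit scheme, periodic boundaries)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)           # standard exponential conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u           # differences towards the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = img_as_float(data.camera()) + np.random.default_rng(0).normal(0, 0.05, (512, 512))
smoothed = perona_malik(noisy)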
APA, Harvard, Vancouver, ISO, and other styles
42

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Karam, Christina Maria. "Acceleration of Non-Linear Image Filters, and Multi-Frame Image Denoising." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1575976497271633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Chen. "Blind Full Reference Quality Assessment of Poisson Image Denoising." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398875743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McGraw, Tim E. "Denoising, segmentation and visualization of diffusion weighted MRI." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Maitree, Rapeepan, Gloria J. Guzman Perez-Carrillo, Joshua S. Shimony, H. Michael Gach, Anupama Chundury, Michael Roach, H. Harold Li, and Deshan Yang. "Adaptive anatomical preservation optimal denoising for radiation therapy daily MRI." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2017. http://hdl.handle.net/10150/626083.

Full text
Abstract:
Low-field magnetic resonance imaging (MRI) has recently been integrated with radiation therapy systems to provide image guidance for daily cancer radiation treatments. The main benefit of the low-field strength is minimal electron return effects. The main disadvantage of low-field strength is increased image noise compared to diagnostic MRIs conducted at 1.5 T or higher. The increased image noise affects both the discernibility of soft tissues and the accuracy of further image processing tasks for both clinical and research applications, such as tumor tracking, feature analysis, image segmentation, and image registration. An innovative method, adaptive anatomical preservation optimal denoising (AAPOD), was developed for optimal image denoising, i.e., to maximally reduce noise while preserving the tissue boundaries. AAPOD employs a series of adaptive nonlocal mean (ANLM) denoising trials with increasing denoising filter strength (i.e., the block similarity filtering parameter in the ANLM algorithm), and then detects the tissue boundary losses on the differences of sequentially denoised images using a zero-crossing edge detection method. The optimal denoising filter strength per voxel is determined by identifying the denoising filter strength value at which boundary losses start to appear around the voxel. The final denoising result is generated by applying the ANLM denoising method with the optimal per-voxel denoising filter strengths. The experimental results demonstrated that AAPOD was capable of reducing noise adaptively and optimally while avoiding tissue boundary losses. AAPOD is useful for improving the quality of MRIs with low-contrast-to-noise ratios and could be applied to other medical imaging modalities, e.g., computed tomography. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
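A coarse sketch of the escalating-strength idea behind AAPOD is given below: denoise with non-local means at a ladder of filter strengths, inspect successive differences for emerging edge structure, and keep, per pixel, the strongest filtering applied before boundaries start to disappear. The Laplacian-magnitude test stands in for the paper's zero-crossing boundary detection, and all parameter values are assumptions for the sketch.

import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means
from scipy.ndimage import laplace

rng = np.random.default_rng(0)
noisy = img_as_float(data.camera())[::2, ::2] + rng.normal(0, 0.08, (256, 256))

strengths = [0.02, 0.05, 0.08, 0.12, 0.2]                  # increasing NLM filter strengths
denoised = [denoise_nl_means(noisy, h=h, patch_size=5, patch_distance=6, fast_mode=True)
            for h in strengths]

chosen = np.zeros(noisy.shape, dtype=int)                  # per-pixel index of the chosen strength
edge_tol = 0.02
for i in range(1, len(denoised)):
    diff = denoised[i] - denoised[i - 1]
    boundary_loss = np.abs(laplace(diff)) > edge_tol        # edge structure appearing in the residual
    chosen[~boundary_loss & (chosen == i - 1)] = i          # allow stronger filtering only where safe

result = np.choose(chosen, denoised)                        # per-pixel "optimal" filter strength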
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Kai-wah. "Mesh denoising and feature extraction from point cloud data." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Miller, Sarah Victoria. "Mulit-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Quan, Jin. "Image Denoising of Gaussian and Poisson Noise Based on Wavelet Thresholding." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1380556846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
