Dissertations / Theses on the topic 'Denoising Images'
Rafi Nazari, Mina. "Denoising and Demosaicking of Color Images." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35802.
Björling, Robin. "Denoising of Infrared Images Using Independent Component Analysis." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4954.
The purpose of this thesis is to evaluate the applicability of the method Independent Component Analysis (ICA) for noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images. The result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed. An independent component analysis is made on images from an infrared sensor and typical fixed pattern noise components are manually identified. The identified components are used to quickly and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as on real images and the performance is measured.
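The sparse code shrinkage idea summarized above can be made concrete with a short sketch: learn an ICA basis from training patches, shrink the transform coefficients of noisy patches, and transform back. This is only an illustrative approximation of the approach described in the abstract (the thesis uses a MAP-derived shrinkage nonlinearity and its own FPN handling); the patch size, the soft threshold and the use of scikit-learn's FastICA are assumptions.

```python
# Illustrative sketch of ICA-based "sparse code shrinkage" denoising.
# Assumptions (not from the thesis): 8x8 patches, soft-threshold shrinkage,
# and scikit-learn's FastICA as the ICA estimator.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def sparse_code_shrinkage(noisy, training_image, patch_size=8, threshold=0.1):
    # Learn an ICA basis from patches of a (relatively) noise-free training image.
    train = extract_patches_2d(training_image, (patch_size, patch_size), max_patches=5000)
    train = train.reshape(train.shape[0], -1)
    ica = FastICA(n_components=patch_size * patch_size, whiten="unit-variance", max_iter=500)
    ica.fit(train)

    # Move the noisy patches to the ICA domain, shrink small coefficients
    # (soft-thresholding stands in for the method's shrinkage nonlinearity),
    # go back to the pixel domain and re-assemble the image.
    patches = extract_patches_2d(noisy, (patch_size, patch_size))
    n, h, w = patches.shape
    codes = ica.transform(patches.reshape(n, -1))
    codes = np.sign(codes) * np.maximum(np.abs(codes) - threshold, 0.0)
    denoised_patches = ica.inverse_transform(codes).reshape(n, h, w)
    return reconstruct_from_patches_2d(denoised_patches, noisy.shape)
```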
Dacke, Fredrik. "Non-local means denoising of projection images in cone beam computed tomography." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122419.
A new edge-preserving noise reduction method is used to improve image quality in cone beam computed tomography. The reconstruction algorithm for cone beam computed tomography used by Elekta amplifies high-frequency image details, such as noise, and we propose that the noise reduction be performed on the projection images before they undergo reconstruction. The denoising method is shown to have connections to computer-intensive statistics, and some mathematical improvements of the method are presented. Comparisons with the best available method are made on both artificial and physical objects. The results show that the smoothness of the images is improved at the expense of smeared image details. Some results show how the parameter settings of the method affect the trade-off between smoothness and smeared image details.
Papoutsellis, Evangelos. "First-order gradient regularisation methods for image restoration : reconstruction of tomographic images with thin structures and denoising piecewise affine images." Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/256216.
Roussel, Nicolas. "Denoising of Dual Energy X-ray Absorptiometry Images and Vertebra Segmentation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233845.
Dual Energy X-ray Absorptiometry (DXA) is a medical imaging modality used to quantify bone density and detect fractures. It is widely used thanks to its low cost and low radiation exposure, but it produces noisy images that can be hard to interpret for a human expert or a machine. In this study we investigate denoising of lateral DXA spine images and automatic segmentation of the vertebrae in the resulting images. For denoising, we design adaptive filters to prevent frequent edge artifacts (cross-contamination), and validate our results with an observer experiment. Segmentation is performed using deep convolutional neural networks trained on manually segmented DXA images. With few training images, we focus on network depth and the amount of training data. At the best depth we report a mean Dice score of 94% on test images without post-processing. We also investigate applying a network trained on one of our databases to another database (with a different resolution). We show that in some cases cross-contamination can degrade the segmentation result and that the use of our adaptive filters helps to solve this problem. Our results show that even with little data and short training, the networks produce correct segmentations, which suggests that they could be used for fracture classification. However, the results should be validated on larger databases with more cases of fractures and other pathologies.
Hua, Yuai, Jianmei Lu, Huayong Zhang, Jinyong Cheng, Wei Liang, and Tianduo Li. "Denoising and Segmentation of MCT Slice Images of Leather Fiber - 170." Verein für Gerberei-Chemie und -Technik e. V, 2019. https://slub.qucosa.de/id/qucosa%3A34310.
Nifong, Nathaniel H. "Learning General Features From Images and Audio With Stacked Denoising Autoencoders." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1550.
Trinh, Dinh Hoan. "Denoising and super-resolution for medical images by example-based learning approach." Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_trinh.pdf.
Zhao, Weiying. "Multitemporal SAR images denoising and change detection : applications to Sentinel-1 data." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT003/document.
The inherent speckle which is attached to any coherent imaging system affects the analysis and interpretation of synthetic aperture radar (SAR) images. To take advantage of well-registered multi-temporal SAR images, we improve the adaptive nonlocal temporal filter with state-of-the-art adaptive denoising methods and propose a patch-based adaptive temporal filter. To address the bias problem of the denoising results, we propose a fast and efficient multitemporal despeckling method. The key idea of the proposed approach is the use of the ratio image, provided by the ratio between an image and the temporal mean of the stack. This ratio image is easier to denoise than a single image thanks to its improved stationarity. Besides, temporally stable thin structures are well preserved thanks to the multi-temporal mean. Without a reference image, we propose to use a patch-based auto-covariance residual evaluation method to examine the residual image and look for possible remaining structural content. With the speckle-reduced images, we propose to use the simplified generalized likelihood ratio method to detect the change area, change magnitude and change times in long series of well-registered images. Based on spectral clustering, we apply the simplified generalized likelihood ratio to detect the time series change types. Then, jet colormap and HSV colorization may be used to vividly visualize the detection results. These methods have been successfully applied to monitor farmland, urban, harbor and flooding area changes.
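As a rough illustration of the ratio-image idea described in this abstract (not the authors' implementation): each date is divided by the temporal mean of the stack, the more stationary ratio image is denoised with any single-date filter, and the result is re-multiplied by the temporal mean. The Gaussian filter and the parameter values below are placeholders.

```python
# Toy version of the ratio-image idea for multi-temporal despeckling.
# The Gaussian filter is a placeholder for the adaptive patch-based filter
# of the thesis; sigma and eps are illustrative values.
import numpy as np
from scipy.ndimage import gaussian_filter

def multitemporal_despeckle(stack, sigma=2.0, eps=1e-6):
    """stack: (T, H, W) array of co-registered SAR intensity images."""
    temporal_mean = stack.mean(axis=0)            # speckle largely averages out over time
    denoised = np.empty_like(stack, dtype=float)
    for t, image in enumerate(stack):
        ratio = image / (temporal_mean + eps)     # more stationary than the image itself
        ratio_dn = gaussian_filter(ratio, sigma)  # any single-date despeckler fits here
        denoised[t] = ratio_dn * temporal_mean    # re-modulate by the temporal mean
    return denoised
```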
Briand, Thibaud. "Image Formation from a Large Sequence of RAW Images : performance and accuracy." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1017/document.
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging issue that requires performing demosaicking, denoising and super-resolution on the fly. Existing algorithms produce high-quality images but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially so that the memory cost only depends on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common system of coordinates using a two-step registration method that we introduce. Then, a color image is computed by accumulation of the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (that we introduce). We evaluate the performance and the accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution and the residual noise decreases as expected. We obtained results similar to those obtained by slower and memory-greedy methods. As generating synthetic data requires an interpolation method, we also study in detail the trigonometric polynomial and B-spline interpolation methods, and derive from this study new fine-tuned interpolation methods.
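The accumulation step mentioned above (irregularly sampled data fused onto a finer grid by classical kernel regression) can be sketched as a toy Nadaraya-Watson splatting loop. All names and parameter values below are assumptions for illustration, not the thesis code; the point is that only output-sized accumulators are kept, so memory does not grow with the number of input images.

```python
# Toy Nadaraya-Watson accumulation of irregular samples onto a finer grid.
import numpy as np

def accumulate(samples_xy, samples_val, out_shape, zoom=2, h=0.7):
    """samples_xy: (N, 2) positions in input-pixel units; out_shape: (H, W) output grid."""
    num = np.zeros(out_shape)
    den = np.zeros(out_shape)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    for (x, y), v in zip(samples_xy, samples_val):
        # Splat each sample onto the output grid, weighted by a Gaussian kernel.
        w = np.exp(-((xs - x * zoom) ** 2 + (ys - y * zoom) ** 2) / (2 * h ** 2))
        num += w * v
        den += w
    return num / np.maximum(den, 1e-8)
```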
Didas, Stephan [Verfasser], and Joachim [Akademischer Betreuer] Weickert. "Denoising and enhancement of digital images : variational methods, integrodifferential equations, and wavelets / Stephan Didas. Betreuer: Joachim Weickert." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/105105673X/34.
Nasser, Khalafallah Mahmoud Lamees. "A dictionary-based denoising method toward a robust segmentation of noisy and densely packed nuclei in 3D biological microscopy images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS283.pdf.
Cells are the basic building blocks of all living organisms. All living organisms share life processes such as growth and development, movement, nutrition, excretion, reproduction, respiration and response to the environment. In cell biology research, understanding cell structure and function is essential for developing and testing new drugs. In addition, cell biology research provides a powerful tool to study embryo development, and it helps the scientific research community to understand the effects of mutations and various diseases. Time-Lapse Fluorescence Microscopy (TLFM) is one of the most appreciated imaging techniques which can be used in live-cell imaging experiments to quantify various characteristics of cellular processes, i.e., cell survival, proliferation, migration, and differentiation. In TLFM imaging, not only spatial information is acquired, but also temporal information obtained by repeated imaging of a labeled sample at specific time points, as well as spectral information, which produces up to five-dimensional (X, Y, Z + Time + Channel) images. Typically, the generated datasets consist of several (hundreds or thousands of) images, each containing hundreds to thousands of objects to be analyzed. To perform high-throughput quantification of cellular processes, nuclei segmentation and tracking should be performed in an automated manner. Nevertheless, nuclei segmentation and tracking are challenging tasks due to embedded noise, intensity inhomogeneity, shape variation as well as weak nuclei boundaries. Although several nuclei segmentation approaches have been reported in the literature, dealing with embedded noise remains the most challenging part of any segmentation algorithm. We propose a novel 3D denoising algorithm, based on unsupervised dictionary learning and sparse representation, that can enhance very faint and noisy nuclei while simultaneously detecting nuclei positions accurately. Furthermore, our method is based on a limited number of parameters, with only one being critical, which is the approximate size of the objects of interest. The framework of the proposed method comprises image denoising, nuclei detection, and segmentation. In the denoising step, an initial dictionary is constructed by selecting random patches from the raw image, then an iterative technique is implemented to update the dictionary and obtain the final one, which is less noisy. Next, a detection map, based on the dictionary coefficients used to denoise the image, is used to detect marker points. Afterward, a thresholding-based approach is proposed to get the segmentation mask. Finally, a marker-controlled watershed approach is used to get the final nuclei segmentation result. We generate 3D synthetic images to study the effect of the few parameters of our method on cell nuclei detection and segmentation, and to understand the overall mechanism for selecting and tuning the significant parameters on several datasets. These synthetic images have low contrast and low signal-to-noise ratio, and include touching spheres, conditions that simulate the characteristics found in the real datasets. The proposed framework shows that integrating our denoising method with a classical segmentation method works properly in the context of the most challenging cases. To evaluate the performance of the proposed method, two datasets from the cell tracking challenge are extensively tested. Across all datasets, the proposed method achieved very promising results, with 96.96% recall for the C. elegans dataset; in the Drosophila dataset, our method achieved a very high recall of 99.3%.
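A rough sketch of the dictionary-learning denoising step described in this abstract is given below (the detection and watershed stages are omitted). scikit-learn's dictionary learner stands in for the unsupervised dictionary update of the thesis; the patch size, number of atoms and sparsity level are illustrative assumptions, not the thesis parameters.

```python
# Sketch: denoise a 3D volume by sparse approximation over a learned dictionary.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def dictionary_denoise(volume, patch=5, n_atoms=64):
    Z, Y, X = volume.shape
    coords = [(z, y, x) for z in range(0, Z - patch + 1, patch)
                        for y in range(0, Y - patch + 1, patch)
                        for x in range(0, X - patch + 1, patch)]
    data = np.array([volume[z:z+patch, y:y+patch, x:x+patch].ravel() for z, y, x in coords])
    mean = data.mean(axis=1, keepdims=True)               # remove the per-patch DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3)
    codes = dico.fit(data - mean).transform(data - mean)  # sparse codes over the learned atoms
    approx = codes @ dico.components_ + mean              # sparse approximation = denoised patches
    denoised = volume.astype(float).copy()                # voxels not covered keep their values
    for (z, y, x), vec in zip(coords, approx):
        denoised[z:z+patch, y:y+patch, x:x+patch] = vec.reshape(patch, patch, patch)
    return denoised
```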
Yousif, Osama. "Change Detection Using Multitemporal SAR Images." Licentiate thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123494.
Mairal, Julien. "Sparse coding for machine learning, image processing and computer vision." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00595312.
Yousif, Osama. "Urban Change Detection Using Multitemporal SAR Images." Doctoral thesis, KTH, Geoinformatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168216.
Zhao, Fangwei. "Multiresolution analysis of ultrasound images of the prostate." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0028.
Full textGhimpeteanu, Gabriela. "Several approaches to improve noise removal in photographic images." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/461012.
Noise acquisition is an unavoidable component of capturing a photograph, even with state-of-the-art cameras, and the problem is accentuated further when lighting conditions are not ideal. Removing the noise present in the captured image therefore remains an essential task in the camera's image processing pipeline. In this thesis, we analyze several approaches to improve current noise removal methods. First, we propose a general framework that allows an existing denoising method to be improved. The framework is motivated by a simple principle: for any algorithm, the lower the noise level in the original image, the higher the quality of the output image. Therefore, by carefully choosing a decomposition of the noisy image into a less noisy one and applying the algorithm to the latter, we can increase the performance of any denoising method. Second, we stress the importance of using a realistic noise model when evaluating any denoising method, since results on realistic images can diverge enormously from the usual scenario of assuming additive white Gaussian (AWG) noise. To this end, we estimate a noise model on RAW images, since the in-camera processing alters the noise and denoising becomes challenging when it no longer follows the AWG model. We show that, under a realistic noise model, a local method applied to RAW data can outperform a non-local method applied to the camera output. Finally, we propose a fast, local denoising method in which the Euclidean curvature of the noisy image is approximated in a regularized way and a clean image is reconstructed from this smoothed curvature. User preference tests show that our method produces results with the same visual quality as more sophisticated non-local algorithms, but at a fraction of their computational cost. These tests also highlight the limitations of objective image quality metrics such as PSNR and SSIM, which correlate poorly with user preference.
Moebel, Emmanuel. "New strategies for the identification and enumeration of macromolecules in 3D images of cryo electron tomography." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S007/document.
Cryo electron tomography (cryo-ET) is an imaging technique capable of producing 3D views of biological specimens. This technology enables capturing large fields of view of vitrified cells at nanometer resolution. These features make it possible to combine several scales of understanding of the cellular machinery, from the interactions between groups of proteins to their atomic structure. Cryo-ET therefore has the potential to act as a link between in vivo cell imaging and atomic resolution techniques. However, cryo-ET images suffer from a high amount of noise and imaging artifacts, and the interpretability of these images heavily depends on computational image analysis methods. Existing methods can identify large macromolecules such as ribosomes, but there is evidence that the detections are incomplete. In addition, these methods are limited when the searched objects are smaller and have more structural variability. The purpose of this thesis is to propose new image analysis methods, in order to enable a more robust identification of macromolecules of interest. We propose two computational methods to achieve this goal. The first aims at reducing the noise and imaging artifacts, and operates by iteratively adding and removing artificial noise to the image. We provide both mathematical and experimental evidence that this concept allows the signal in cryo-ET images to be enhanced. The second method builds on recent advances in machine learning to improve macromolecule localization. The method is based on a convolutional neural network, and we show how it can be adapted to achieve better detection rates than the current state-of-the-art.
Müller, Jan-Steffen [Verfasser], and Martin [Akademischer Betreuer] Fuchs. "Regularity aspects of a higher-order variational approach to the denoising and inpainting of images with TV-type energies / Jan-Steffen Müller ; Betreuer: Martin Fuchs." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/1155164784/34.
Full textMüller, Jan-Steffen Verfasser], and Martin [Akademischer Betreuer] [Fuchs. "Regularity aspects of a higher-order variational approach to the denoising and inpainting of images with TV-type energies / Jan-Steffen Müller ; Betreuer: Martin Fuchs." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2018. http://d-nb.info/1155164784/34.
Sutour, Camille. "Vision nocturne numérique : restauration automatique et recalage multimodal des images à bas niveau de lumière." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0099/document.
Night vision for helicopter pilots is artificially enhanced by a night vision system. It consists of a light intensifier (LI) coupled with a digital camera, and an infrared camera. The goal of this thesis is to improve this device by analyzing its defects in order to correct them. The first part consists in reducing the noise level on the LI images. This requires evaluating the nature of the noise corrupting these images, so an automatic noise estimation method has been developed. The estimation is based on a non-parametric detection of homogeneous areas. Then the noise statistics are estimated using these homogeneous regions by performing a robust l1 estimation of the noise level function. The LI images can then be denoised using the noise estimation. In the second part we have developed a denoising algorithm that combines the non-local means with variational methods by applying an adaptive regularization weighted by a non-local data fidelity term. This algorithm is then adapted to video denoising using the redundancy provided by the sequences, hence guaranteeing temporal stability and preservation of the fine structures. Finally, in the third part, data from the optical and infrared sensors are registered. We propose an edge-based multimodal registration metric. Combined with a gradient ascent resolution and a temporal scheme, the proposed method allows robust registration of the two modalities for later fusion.
Malek, Mohamed. "Extension de l'analyse multi-résolution aux images couleurs par transformées sur graphes." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2304/document.
In our work, we studied the extension of multi-resolution analysis to color images by using transforms on graphs. In this context, we deployed three different strategies of analysis. Our first approach consists of computing the graph of an image using psychovisual information and analyzing it with the spectral graph wavelet transform. We thus define a wavelet transform based on a graph with perceptual information by using the CIELab color distance. Results in image restoration highlight the interest of an appropriate use of color information. In the second strategy, we propose a novel recovery algorithm for image inpainting represented in the graph domain. Motivated by the efficiency of wavelet regularization schemes and the success of non-local means methods, we construct an algorithm based on the recovery of information in the graph wavelet domain. At each step, the damaged structures are estimated by computing the non-local graph, and then the graph wavelet regularization model is applied using the SGWT coefficients. The results are very encouraging and highlight the use of perceptual information. In the last strategy, we propose a new decomposition approach for signals defined on complete graphs. This method is based on the exploitation of the Laplacian matrix properties of the complete graph. In the context of image processing, the use of the color distance is essential to identify the specificities of the color image. This approach opens new perspectives for an in-depth study of its behavior.
Tran, Dai viet. "Patch-based Bayesian approaches for image restoration." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD049.
In this thesis, we investigate patch-based image denoising and super-resolution under the Bayesian Maximum A Posteriori framework, with the help of a set of high-quality images known as standard images. Our contributions address the construction of the dictionary used to represent image patches and the prior distribution in dictionary space. We demonstrate that a careful selection of the dictionary representing the local information of the image can improve the reconstruction. By establishing an exhaustive dictionary from the standard images, our main contribution is to locally select a sub-dictionary of matched patches to recover each patch in the degraded image. Besides the conventional Euclidean measure, we propose an effective similarity metric based on the Earth Mover's Distance (EMD) for image patch selection by considering each patch as a distribution of image intensities. Our EMD-based super-resolution algorithm outperforms several state-of-the-art super-resolution methods. To enhance the quality of image denoising, we exploit the distribution of patches in the dictionary space as an image prior to regularize the optimization problem. We develop a computationally efficient procedure, based on piecewise-constant function estimation, for low-dimensional dictionaries, and then propose a Gaussian Mixture Model (GMM) for higher-complexity dictionary spaces. Finally, we justify the practical number of Gaussian components required for recovering patches. Our experiments on multiple datasets, combining different dictionaries and GMM models, complement the limited evidence in the literature on the use of GMMs in this setting.
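The EMD-based patch selection mentioned above can be illustrated with a toy snippet that ranks candidate patches by the 1-D Wasserstein distance between their intensity distributions instead of the Euclidean patch distance; the function name, arguments and the value of k are assumptions, not the thesis implementation.

```python
# Toy illustration of EMD-based patch selection.
import numpy as np
from scipy.stats import wasserstein_distance

def select_matched_patches(query_patch, candidate_patches, k=10):
    """Return the k candidates whose intensity distribution is closest to the query (EMD sense)."""
    q = query_patch.ravel()
    scores = [wasserstein_distance(q, p.ravel()) for p in candidate_patches]
    order = np.argsort(scores)[:k]
    return [candidate_patches[i] for i in order], [scores[i] for i in order]
```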
Zhang, Jiachao. "Image denoising for real image sensors." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1437954286.
Irrera, Paolo. "Traitement d'images de radiographie à faible dose : Débruitage et rehaussement de contraste conjoints et détection automatique de points de repère anatomiques pour l'estimation de la qualité des images." Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0031/document.
We aim at reducing the ALARA (As Low As Reasonably Achievable) dose limits for images acquired with the EOS full-body system by means of image processing techniques. Two complementary approaches are studied. First, we define a post-processing method that optimizes the trade-off between acquired image quality and X-ray dose. The Non-Local Means filter is extended to restore EOS images, and we then study how to combine it with a multi-scale contrast enhancement technique. The image quality for diagnosis is optimized by defining non-parametric noise containment maps that limit the increase of noise depending on the amount of local redundant information captured by the filter. Secondly, we estimate exposure index (EI) values on EOS images, which give immediate feedback on image quality to help radiographers verify the correct exposure level of the X-ray examination. We propose a landmark detection based approach that is more robust to potential outliers than existing methods, as it exploits the redundancy of local estimates. Finally, the proposed joint denoising and contrast enhancement technique significantly increases the image quality with respect to an algorithm used in clinical routine. Robust image quality indicators can be automatically associated with clinical EOS images. Given the consistency of the measures assessed on preview images, these indices could be used to drive an exposure management system in charge of defining the optimal radiation exposure.
Casaca, Wallace Correa de Oliveira [UNESP]. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/94247.
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
In this work we propose four new approaches to address the problem of restoring real images containing textures, from the perspective of reconstruction of damaged areas, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on a three-term decomposition and non-linear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
Ghazel, Mohsen. "Adaptive Fractal and Wavelet Image Denoising." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/882.
Li, Zhi. "Variational image segmentation, inpainting and denoising." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/292.
Danda, Swetha. "Generalized diffusion model for image denoising." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5481.
Full textTitle from document title page. Document formatted into pages; contains viii, 62 p. : ill. Includes abstract. Includes bibliographical references (p. 59-62).
Niu, Pei. "Multi-energy image reconstruction in spectral photon-counting CT." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI022.
Spectral photon-counting CT (sCT) appeared recently as a new imaging technique presenting fundamental advantages with respect to conventional CT and dual-energy CT. However, due to the reduced number of photons in each energy bin of sCT and various artifacts, image reconstruction becomes particularly difficult. This thesis focuses on the reconstruction of multi-energy images in sCT. First, we propose to consider the ability of sCT to achieve both anatomical (aCT) and functional (fCT) imaging simultaneously in one single acquisition through reconstruction and material decomposition. The aCT function of sCT is studied under the same configuration as that of conventional CT, and the fCT function of sCT is investigated by applying material decomposition algorithms to the same acquired multi-energy data. Then, since noise is a particularly acute problem due to the largely reduced number of photons in each energy bin of sCT, we introduce a denoising mechanism in the image reconstruction to perform simultaneous reconstruction and denoising. Finally, to improve image reconstruction, we propose to reconstruct the image at a given energy bin by exploiting information in all other energy bins. The key strategy in such an approach consists of grouping the similar pixels from the reconstruction of all the energy bins into the same class, fitting within each class, mapping the fitting results into each energy bin, and denoising with the mapped information. It is used both as a post-denoising operation to demonstrate its effectiveness and as a regularization term, or a combined regularization term, for simultaneous reconstruction and denoising. All the above methods are evaluated on both simulated and real data from a pre-clinical sCT system.
Casaca, Wallace Correa de Oliveira. "Restauração de imagens digitais com texturas utilizando técnicas de decomposição e equações diferenciais parciais /." São José do Rio Preto : [s.n.], 2010. http://hdl.handle.net/11449/94247.
Examining committee: Evanildo Castro Silva Júnior
Examining committee: Alagacone Sri Ranga
Abstract: In this work we propose four new approaches to address the problem of restoring real images containing textures, from the perspective of reconstruction of damaged areas, object removal, and denoising. The first two approaches are designed to reconstruct missing parts or to remove objects from a real image using formulations based on image decomposition and exemplar-based inpainting, while the last two approaches are used to remove noise, with formulations based on a three-term decomposition and non-linear partial differential equations. Experimental results attest to the good performance of the presented prototypes when compared to related models in the literature.
Master's
Deng, Hao. "Mathematical approaches to digital color image denoising." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Hussain, Israr. "Non-gaussianity based image deblurring and denoising." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489022.
Sarjanoja, S. (Sampsa). "BM3D image denoising using heterogeneous computing platforms." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201504141380.
Noise reduction is one of the most fundamental problems in digital image processing, and it is usually addressed early in the signal processing chain. Noise appears in images in many different ways and its presence is unavoidable. Many image processing algorithms perform better if their input is as error-free as possible. To keep image processing latencies small on different computing platforms, it is important that denoising is also performed quickly. Driven by the entertainment industry, the computing power of graphics processing units has multiplied; modern GPUs consist of several hundreds or even thousands of compute units. Using these compute units for general-purpose computing is possible through the OpenCL and CUDA programming interfaces. Parallel computation on many compute units enables large performance gains in applications where the processed data is independent or only loosely dependent. General-purpose GPU computing is also becoming common on mobile devices, where most photography takes place today. This master's thesis investigates implementing the computation of a state-of-the-art denoising technique, block-matching and three-dimensional filtering (BM3D), on heterogeneous computing platforms. The performance of the presented implementations is evaluated by comparing them with existing implementations. The presented implementations achieve significant benefits from the use of parallel computing, and the comparisons also illustrate common problem areas in using GPU computation for complex image processing algorithms.
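For readers unfamiliar with BM3D, the two building blocks that the thesis maps to OpenCL/CUDA (block matching and collaborative filtering of the matched group) can be sketched as follows; this toy single-group version only shows the structure of the computation, all parameters are illustrative, and it is not the reference BM3D implementation.

```python
# Toy single-group sketch of BM3D-style grouping and collaborative filtering.
# Intensities are assumed to be in [0, 1]; thr, block and search are illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def bm3d_like_group_filter(image, ref_yx, block=8, search=16, k=16, thr=0.1):
    y0, x0 = ref_yx
    ref = image[y0:y0+block, x0:x0+block]
    # Block matching: gather the k most similar blocks in a local search window.
    candidates = []
    for y in range(max(0, y0 - search), min(image.shape[0] - block, y0 + search)):
        for x in range(max(0, x0 - search), min(image.shape[1] - block, x0 + search)):
            blk = image[y:y+block, x:x+block]
            candidates.append((np.sum((blk - ref) ** 2), blk))
    candidates.sort(key=lambda c: c[0])
    group = np.stack([blk for _, blk in candidates[:k]])
    # Collaborative filtering: 3-D DCT, hard threshold, inverse transform.
    coeffs = dctn(group, norm="ortho")
    coeffs[np.abs(coeffs) < thr] = 0.0
    return idctn(coeffs, norm="ortho")   # filtered blocks, to be aggregated back into the image
```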
Houdard, Antoine. "Some advances in patch-based image denoising." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT005/document.
This thesis studies non-local methods for image processing, and their application to various tasks such as denoising. Natural images contain redundant structures, and this property can be used for restoration purposes. A common way to exploit this self-similarity is to separate the image into "patches". These patches can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: Why are Gaussian priors so widely used? What information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model on the noisy patches. This model adopts a sparse modeling which assumes that the data lie on group-specific subspaces of low dimensionality. This yields a denoising algorithm that demonstrates state-of-the-art performance. The last chapter explores different ways of aggregating the patches together. A framework that expresses the patch aggregation in the form of a least squares problem is proposed.
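The role of a Gaussian prior discussed in this abstract can be made concrete: with y = x + n, n ~ N(0, σ²I) and x ~ N(μ, C), the MAP (and posterior mean) estimate is x̂ = μ + C(C + σ²I)⁻¹(y − μ), i.e. a Wiener-type filter. The snippet below applies this to a group of similar patches with an empirically estimated prior; it is a didactic sketch under these assumptions, not the thesis algorithm.

```python
# Didactic sketch of MAP denoising of a patch group under a Gaussian prior.
# A practical version would subtract sigma^2 I from the empirical covariance
# to estimate the prior covariance of the clean patches.
import numpy as np

def gaussian_map_denoise(noisy_patches, sigma):
    """noisy_patches: (N, d) array of vectorized similar patches; sigma: noise std."""
    mu = noisy_patches.mean(axis=0)
    centered = noisy_patches - mu
    C = centered.T @ centered / max(len(noisy_patches) - 1, 1)   # empirical covariance
    d = C.shape[0]
    filt = C @ np.linalg.inv(C + sigma ** 2 * np.eye(d))         # Wiener-type gain
    return mu + centered @ filt.T
```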
De Santis, Simone. "Quantum Median Filter for Total Variation denoising." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Tuncer, Guney. "A Java Toolbox For Wavelet Based Image Denoising." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12608037/index.pdf.
Michael, Simon. "A Comparison of Data Transformations in Image Denoising." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-375715.
Aparnnaa. "Image Denoising and Noise Estimation by Wavelet Transformation." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1555929391906805.
Lind, Johan. "Evaluating CNN-based models for unsupervised image denoising." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176092.
Liu, Xiaoyang. "Advanced numerical methods for image denoising and segmentation." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11954/.
Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.
Karam, Christina Maria. "Acceleration of Non-Linear Image Filters, and Multi-Frame Image Denoising." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1575976497271633.
Zhang, Chen. "Blind Full Reference Quality Assessment of Poisson Image Denoising." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398875743.
McGraw, Tim E. "Denoising, segmentation and visualization of diffusion weighted MRI." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011618.
Maitree, Rapeepan, Gloria J. Guzman Perez-Carrillo, Joshua S. Shimony, H. Michael Gach, Anupama Chundury, Michael Roach, H. Harold Li, and Deshan Yang. "Adaptive anatomical preservation optimal denoising for radiation therapy daily MRI." SPIE-SOC PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, 2017. http://hdl.handle.net/10150/626083.
Lee, Kai-wah. "Mesh denoising and feature extraction from point cloud data." HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664330.
Miller, Sarah Victoria. "Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1596444315236623.
Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.
Quan, Jin. "Image Denoising of Gaussian and Poisson Noise Based on Wavelet Thresholding." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1380556846.