
Dissertations / Theses on the topic 'Image processing Images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Image processing Images.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Murphy, Brian P. "Image processing techniques for acoustic images." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26585.

Full text
Abstract:
Approved for public release; distribution is unlimited. The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering and median filtering.
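As an illustration of the kind of median filtering and edge detection named in this abstract, here is a minimal NumPy/SciPy sketch on a synthetic image; it is not the thesis's MATLAB implementation, and the image, filter size and Sobel edge detector are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic "acoustic" image: a bright rectangular target over heavy noise.
image = np.zeros((128, 128))
image[40:90, 50:100] = 1.0
noisy = image + 0.4 * rng.standard_normal(image.shape)

# Median filtering suppresses impulsive/speckle-like noise while preserving edges.
denoised = ndimage.median_filter(noisy, size=5)

# Sobel gradient magnitude as a basic edge detector on the denoised image.
gx = ndimage.sobel(denoised, axis=1)
gy = ndimage.sobel(denoised, axis=0)
edges = np.hypot(gx, gy)
print(edges.max())
```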
APA, Harvard, Vancouver, ISO, and other styles
2

Yallop, Marc Richard. "Image processing techniques for passive millimetre wave images." Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

AbouRayan, Mohamed. "Real-time Image Fusion Processing for Astronomical Images." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461449811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tummala, Sai Virali, and Veerendra Marni. "Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Younhee. "Towards lower bounds on distortion in information hiding." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3403.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2008. Vita: p. 133. Thesis directors: Zoran Duric, Dana Richards. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. Title from PDF t.p. (viewed Mar. 17, 2009). Includes bibliographical references (p. 127-132). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
6

Abdulla, Ghaleb. "An image processing tool for cropping and enhancing images." Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12232009-020207/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pedron, Ilario. "Digital image processing for cancer cell finding using color images." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ahtaiba, Ahmed Mohamed A. "Restoration of AFM images using digital signal and image processing." Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604322.

Full text
Abstract:
All atomic force microscope (AFM) images suffer from distortions, which are principally produced by the interaction between the measured sample and the AFM tip. If the three-dimensional shape of the tip is known, the distorted image can be processed and the original surface form 'restored', typically by deconvolution approaches. This restored image gives a better representation of the real 3D surface of the measured sample than the original distorted image. In this thesis, a quantitative investigation of using morphological deconvolution has been used to restore AFM images via computer simulation using various computer-simulated tips and objects. This thesis also presents a systematic quantitative study of the blind tip estimation algorithm via computer simulation using various computer-simulated tips and objects. This thesis proposes a new method for estimating the impulse response of the AFM by measuring a micro-cylinder with a-priori known dimensions using contact mode AFM. The estimated impulse response is then used to restore subsequent AFM images, when measured with the same tip, under similar measurement conditions. Significantly, an approximation to what corresponds to the impulse response of the AFM can be deduced using this method. The suitability of this novel approach for restoring AFM images has been confirmed using both computer simulation and real experimental AFM images. This thesis suggests another new approach (impulse response technique) to estimate the impulse response of the AFM, this time from a square pillar sample that is measured using contact mode AFM. Once the impulse response is known, a deconvolution process is carried out between the estimated impulse response and typical 'distorted' raw AFM images in order to reduce the distortion effects. The experimental results and the computer simulations validate the performance of the proposed approach, illustrating that the AFM image accuracy has been significantly improved. A new approach has been implemented in this research programme for the restoration of AFM images enabling a combination of cantilever and feedback signals at different scanning speeds. In this approach, the AFM topographic image is constructed using values obtained by summing the height image that is used for driving the Z-scanner and the deflection image with a weight function α that is close to 3. The value of α has been determined experimentally using trial and error. This method has been tested at ten different scanning speeds and it consistently gives more faithful topographic images than the original AFM images.
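A hedged sketch of the morphological tip-sample "deconvolution" idea described above, using greyscale morphology from SciPy on synthetic data; the parabolic tip model and feature sizes are assumptions, and this is an illustration rather than the thesis's algorithms.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# Synthetic true surface and a blunt parabolic tip model (arbitrary height units).
x = np.linspace(-1, 1, 21)
tip = -4.0 * (x[:, None] ** 2 + x[None, :] ** 2)   # apex at height 0

surface = np.zeros((128, 128))
surface[60:68, 60:68] = 5.0                        # a small raised feature

# Tip-sample interaction broadens features (modelled as greyscale dilation);
# erosion with the known tip partially restores the original surface.
measured = grey_dilation(surface, structure=tip)
restored = grey_erosion(measured, structure=tip)

print(float(np.abs(restored - surface).max()))
```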
APA, Harvard, Vancouver, ISO, and other styles
9

Wear, Steven M. "Shift-invariant image reconstruction of speckle-degraded images using bispectrum estimation /." Online version of thesis, 1990. http://hdl.handle.net/1850/11219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Karelid, Mikael. "Image Enhancement over a Sequence of Images." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12523.

Full text
Abstract:
This Master's thesis has been conducted at the National Laboratory of Forensic Science (SKL) in Linköping. When images to be analyzed at SKL, showing an object of interest, are of poor quality, there may be a need to enhance them. If several images with the object are available, the total amount of information can be used to estimate one single enhanced image. A program to do this has been developed by studying methods for image registration and high-resolution image estimation. Tests of important parts of the procedure have been conducted. The final results are satisfying, and the key to a good high-resolution image seems to be the precision of the image registration. Improvements of this part may lead to even better results. More suggestions for further improvements have been proposed.
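As a rough illustration of the register-then-fuse idea in this abstract, the sketch below aligns noisy shifted frames with phase correlation and averages them; it assumes a recent scikit-image (skimage.registration.phase_cross_correlation) and synthetic data, and is not the thesis's program.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = rng.random((64, 64))

# Simulate noisy, slightly shifted observations of the same scene.
frames = []
for dy, dx in [(0.0, 0.0), (1.0, -2.0), (-3.0, 1.0)]:
    frames.append(nd_shift(reference, (dy, dx)) + 0.05 * rng.standard_normal((64, 64)))

# Estimate each frame's shift against the first frame, undo it, then average.
aligned = [frames[0]]
for frame in frames[1:]:
    est_shift, _, _ = phase_cross_correlation(frames[0], frame, upsample_factor=10)
    aligned.append(nd_shift(frame, est_shift))
fused = np.mean(aligned, axis=0)
print(fused.shape)
```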
APA, Harvard, Vancouver, ISO, and other styles
11

Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis /." Online version of thesis, 1990. http://hdl.handle.net/1850/11366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Louridas, Efstathios. "Image processing and analysis of videofluoroscopy images in cleft palate patients." Thesis, University of Kent, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

ROS, RENATO A. "Fusão de imagens médicas para aplicação de sistemas de planejamento de tratamento em radioterapia." reponame:Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11417.

Full text
Abstract:
Thesis (Doctorate) - Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP.
APA, Harvard, Vancouver, ISO, and other styles
14

Yau, Chin-ko, and 游展高. "Super-resolution image restoration from multiple decimated, blurred and noisy images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30292529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Guarino, de Vasconcelos Luiz Eduardo, André Yoshimi Kusomoto, and Nelson Paiva Oliveira Leite. "Using Image Processing and Pattern Recognition in Images from Head-Up Display." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579665.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV. Image frames have always been used as an information source for Flight Test Campaigns (FTC). During flight tests, the images displayed on the Head-Up Display (HUD) can be stored for later analysis. HUD images present aircraft data provided by the avionics system. For a simplified Flight Test Instrumentation (FTI), where data accuracy is not a major issue, HUD images can become the primary information source. In this case, however, data analysis is executed manually, frame by frame, to extract information (e.g., aircraft position parameters: latitude, longitude and altitude). Approximately one hour of flight test generates about 36,000 frames in standard-definition television format, so data extraction becomes complex, time consuming and prone to failures. To improve efficiency and effectiveness for this FTC, the Instituto de Pesquisas e Ensaios em Voo (IPEV - Flight Test and Research Institute), together with the Instituto Tecnológico de Aeronáutica (ITA - Aeronautical Technology Institute), developed an image processing application with pattern recognition that uses a correlation process to extract information from different positions in the HUD images. Preliminary test and evaluation were carried out in 2012 using HUD images of the EMBRAER A1 jet fighter. The test results demonstrate satisfactory performance for this tool.
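To make the correlation-based extraction concrete, here is a minimal template-matching sketch using scikit-image's normalized cross-correlation on a synthetic frame; the glyph, frame and positions are made up for the example, and this is not the IPEV/ITA application.

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(2)
frame = 0.2 * rng.random((120, 160))        # stand-in for a HUD video frame

# Paste a synthetic "digit" template into the frame at a known position.
template = np.zeros((12, 8))
template[2:10, 3:5] = 1.0                   # a crude vertical bar standing in for a digit
frame[40:52, 100:108] += template

# The normalized cross-correlation peak gives the template's location in the frame.
response = match_template(frame, template)
row, col = np.unravel_index(np.argmax(response), response.shape)
print(row, col)                             # expected near (40, 100)
```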
APA, Harvard, Vancouver, ISO, and other styles
16

Ben, Rabha Jamal Salh. "An image processing decisional system for the Achilles tendon using ultrasound images." Thesis, University of Salford, 2018. http://usir.salford.ac.uk/46561/.

Full text
Abstract:
The Achilles Tendon (AT) is described as the largest and strongest tendon in the human body. As with any other organ in the human body, the AT is associated with some medical problems, which include Achilles rupture and Achilles tendonitis. AT rupture affects about 1 in 5,000 people worldwide. Additionally, it is seen in about 10 percent of patients involved in sports activities. Today, ultrasound imaging plays a crucial role in medical imaging technologies. It is portable, non-invasive, free of radiation risks, relatively inexpensive and capable of taking real-time images. There is a lack of research that looks into the early detection and diagnosis of AT abnormalities from ultrasound images. This motivated the researcher to build a complete system which enables one to crop, denoise, enhance, extract the important features from and classify AT ultrasound images. The proposed application focuses on developing an automated system platform. Generally, systems for analysing ultrasound images involve four stages: pre-processing, segmentation, feature extraction and classification. To produce the best results for classifying the AT, the SRAD, CLAHE, GLCM, GLRLM and KPCA algorithms have been used. This was followed by the use of different standard and ensemble classifiers, trained and tested using the dataset samples and reduced features, to categorize the AT images as normal or abnormal. Various classifiers have been adopted in this research to improve the classification accuracy. To build an image decisional system, a set of 57 AT ultrasound images was collected. These images were used in three different approaches in which the Region of Interest (ROI) position and size are located differently. To avoid misleading metrics caused by class imbalance, different evaluation metrics have been adopted to compare classifiers and evaluate the overall classification accuracy. The classification outcomes are evaluated using different metrics in order to estimate the decisional system performance. A high accuracy of 83% was achieved during the classification process. Most of the ensemble classifiers worked better than the standard classifiers in all three ROI approaches. The research aim was achieved and accomplished by building an image processing decisional system for AT ultrasound images. This system can distinguish between normal and abnormal AT ultrasound images. In this decisional system, AT images were improved and enhanced to achieve a high classification accuracy without any user intervention.
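A small sketch of the GLCM texture-feature step mentioned in this pipeline, using scikit-image (the functions are named graycomatrix/graycoprops in scikit-image >= 0.19); the random ROI, distances and angles are placeholders, not the thesis's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
roi = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for an ultrasound ROI

# Grey-level co-occurrence matrix at distance 1 for four directions,
# then a few scalar texture features averaged over the directions.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```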
APA, Harvard, Vancouver, ISO, and other styles
17

Gopalan, Sowmya. "Estimating Columnar Grain Size in Steel-Weld Images using Image Processing Techniques." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250621610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Das, Mohammed. "Image analysis techniques for vertebra anomaly detection in X-ray images." Diss., Rolla, Mo. : University of Missouri--Rolla i.e. [Missouri University of Science and Technology], 2008. http://scholarsmine.mst.edu/thesis/MohammedDas_Thesis_09007dcc804c3cf6.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008. Degree granted by Missouri University of Science and Technology, formerly known as University of Missouri--Rolla. Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed March 24, 2008). Includes bibliographical references (p. 87-88).
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Chia-Chin. "Image quality as a function of unsharp masking band center /." Online version of thesis, 1988. http://hdl.handle.net/1850/10420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kim, Kyu-Heon. "Segmentation of natural texture images using a robust stochastic image model." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cabrera, Gil Blanca. "Deep Learning Based Deformable Image Registration of Pelvic Images." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279155.

Full text
Abstract:
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or using optimization-based algorithms, which are not always optimal for registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. In the first place, a neural network is optimized to register CT pelvic image pairs. Later, the model is trained on MR-CT image pairs to register CT images to match their MR counterparts. To solve the problem of unavailable ground truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point of our model, and random deformations were generated to create a ground truth deformation field. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, obtaining a field with deformations of different scales. Several models were trained on images of different resolutions. Their performance was benchmarked with an analytic algorithm used in an actual registration workflow. The CT-CT models were tested using image pairs created by applying synthetic deformation fields. The MR-CT models were tested using two types of test images. The first one contained synthetic CT images and MR ones deformed by synthetically generated deformation fields. The second test set contained real MR-CT image pairs. The test performance was measured using the Dice coefficient. The CT-CT models obtained Dice scores higher than 0.82 even for the models trained on lower resolution images. Despite the fact that all MR-CT models experienced a drop in their performance, the biggest decrease came from the analytic method used as a reference, both for synthetic and real test data. This means that the deep learning models outperformed the state-of-the-art analytic benchmark method. Even though the obtained Dice scores would need further improvement to be used in a clinical setting, the results show great potential for using deep learning-based methods for multi- and mono-modal deformable image registration.
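The Dice similarity coefficient used to score these registrations has a simple generic form; the sketch below is that standard definition applied to two toy binary masks, not the thesis's evaluation code.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), dtype=bool); b[10:26, 10:26] = True
print(round(dice(a, b), 3))   # overlap of two offset squares
```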
APA, Harvard, Vancouver, ISO, and other styles
22

Karlsson, Simon, and Per Welander. "Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148475.

Full text
Abstract:
Generative Adversarial Networks (GANs) are a deep learning method that has been developed for synthesizing data. One application for which they can be used is image-to-image translation. This could prove to be valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands training data with high diversity. The scenarios in the medical field are fewer, but the problem is instead that it is difficult, time consuming and expensive to collect training data. This thesis evaluates different GAN models by comparing synthetic MR images produced by the models against ground truth images. A perceptual study is also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It is also shown that models producing more visually realistic synthetic images do not necessarily have better results in quantitative error measurements when compared to ground truth data. Along with the investigations on medical images, the thesis explores the possibilities of generating synthetic street view images of different resolution, light and weather conditions. Different GAN models have been compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Shyi-Shyang. "Comparing the ability of subjective quality factor and information theory to predict image quality /." Online version of thesis, 1994. http://hdl.handle.net/1850/11880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Glotfelty, Joseph Edmund. "Automatic selection of optimal window size and shape for texture analysis." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=898.

Full text
Abstract:
Thesis (M.A.)--West Virginia University, 1999. Title from document title page. Document formatted into pages; contains vii, 59 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 55-59).
APA, Harvard, Vancouver, ISO, and other styles
25

Schultz, Leah Hastings Samantha K. "Image manipulation and user-supplied index terms." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-9828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Madaris, Aaron T. "Characterization of Peripheral Lung Lesions by Statistical Image Processing of Endobronchial Ultrasound Images." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1485517151147533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.

Full text
Abstract:
ABSTRACT: AASHISH NEUPANE, for the Master of Science degree in BIOMEDICAL ENGINEERING, presented on July 35, 2020, at Southern Illinois University Carbondale. TITLE: VISUAL SALIENCY ANALYSIS ON FASHION IMAGES USING IMAGE PROCESSING AND DEEP LEARNING APPROACHES. MAJOR PROFESSOR: Dr. Jun Qin. State-of-the-art computer vision technologies have been applied in fashion in multiple ways, and saliency modeling is one of those applications. In computer vision, a saliency map is a 2D topological map which indicates the probabilistic distribution of visual attention priorities. This study focuses on analysis of the visual saliency of fashion images using multiple saliency models, evaluated by several evaluation metrics. A human subject study has been conducted to collect people's visual attention on 75 fashion images. Binary ground-truth fixation maps for these images have been created based on the experimentally collected visual attention data using a Gaussian blurring function. Saliency maps for these 75 fashion images were generated using multiple conventional saliency models as well as deep feature-based state-of-the-art models. DeepFeat has been studied extensively, with 44 sets of saliency maps, exploiting the features extracted from GoogLeNet and ResNet50. Seven other saliency models have also been utilized to predict saliency maps on these images. The results were compared over five evaluation metrics: AUC, CC, KL Divergence, NSS and SIM. The performance of all eight saliency models in predicting visual attention on fashion images over all five metrics was comparable to the benchmarked scores. Furthermore, the models perform consistently well over multiple evaluation metrics, thus indicating that saliency models could in fact be applied to effectively predict salient regions in random fashion advertisement images.
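Two of the evaluation metrics listed above (NSS and CC) have short generic definitions; the sketch below writes them out on random arrays and is not the study's evaluation code.

```python
import numpy as np

def nss(saliency: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean z-scored saliency value at fixated pixels."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(s[fixations.astype(bool)].mean())

def cc(saliency: np.ndarray, fixation_map: np.ndarray) -> float:
    """Pearson linear correlation between a saliency map and a continuous fixation map."""
    return float(np.corrcoef(saliency.ravel(), fixation_map.ravel())[0, 1])

rng = np.random.default_rng(4)
sal = rng.random((60, 80))
fix = np.zeros((60, 80)); fix[30, 40] = 1; fix[15, 20] = 1   # two fixated pixels
print(nss(sal, fix), cc(sal, rng.random((60, 80))))
```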
APA, Harvard, Vancouver, ISO, and other styles
28

Zeng, Gang. "Surface reconstruction from images /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20ZENG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Le, Van Linh. "Automatic landmarking for 2D biological images : image processing with and without deep learning methods." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0238.

Full text
Abstract:
Landmarks appear in applications of different domains such as biomedicine and biology. They are also one of the data types used in different analyses; for example, they are used not only to measure the form of an object but also to determine the similarity between two objects. In biology, landmarks are used to analyze inter-organism variations; however, supplying landmarks is very laborious and most often they are provided manually. In recent years, several methods have been proposed to automatically predict landmarks, but difficulties remain because these methods focus on specific data.
This thesis focuses on the automatic determination of landmarks on biological images, more specifically on two-dimensional images of beetles. In our research, we collaborated with biologists to build a dataset including images of 293 beetles. For each beetle in this dataset, 5 images corresponding to 5 parts were taken into account, e.g., head, body, pronotum, left and right mandible. Along with each image, a set of landmarks was manually provided by the biologists. In a first step, we applied a method originally used on fly wings to our dataset, with the aim of testing the suitability of image processing techniques for our problem. Secondly, we developed a method consisting of several stages to automatically provide the landmarks on the images. These two first steps were carried out on the mandible images, which are considered straightforward targets for image processing methods. Thirdly, we went on to consider the other, more complex parts of the beetles. For this, we used deep learning: we designed a new Convolutional Neural Network model, named EB-Net, to predict the landmarks on the remaining images. In addition, we proposed a new procedure to augment the number of images in our dataset, whose limited size was the main obstacle to applying deep learning. Finally, to improve the quality of the predicted coordinates, we employed transfer learning, another deep learning technique: EB-Net was first trained on a public facial keypoints dataset and then fine-tuned on the beetle images. The obtained results were discussed with the biologists, who confirmed that the quality of the predicted landmarks is statistically good enough to replace manual landmarks for most of the different morphometric analyses.
APA, Harvard, Vancouver, ISO, and other styles
30

Deguillaume, Frédéric. "Hybrid robust watermarking and tamperproofing of visual media /." Genève : Université de Genève, 2002. http://www.loc.gov/catdir/toc/fy0707/2005438894.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nutavej, Apiwat. "Study of the graininess models using the Macintosh computer /." Online version of thesis, 1987. http://hdl.handle.net/1850/10154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Holm, Jack. "A log NEQ based comparison of several silver halide and electronic pictorial imaging systems /." Online version of thesis, 1993. http://hdl.handle.net/1850/11736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Demeulemeester, Kilian. "Needle Localization in Ultrasound Images : FULL NEEDLE AXIS AND TIP LOCALIZATION IN ULTRASOUND IMAGES USING GPS DATA AND IMAGE PROCESSING." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180454.

Full text
Abstract:
Many medical interventions involve ultrasound-based imaging systems to safely localize and navigate instruments into the patient's body. To facilitate visual tracking of the instruments, we investigate the techniques and methodologies best suited for solving the problem of needle localization in ultrasound images. We propose a robust procedure that automatically determines the position of a needle in 2D ultrasound images. This task is decomposed into the localization of the needle axis and of its tip. A first estimation of the axis position is computed with the help of multiple position sensors, including one embedded in the transducer and another in the needle. Based on this, the needle axis is computed using a RANSAC algorithm. The tip is detected by analyzing the intensity along the axis, and a Kalman filter is added to compensate for measurement uncertainties. The algorithms were experimentally verified on real ultrasound images acquired by a 2D scanner scanning a portion of a cryogel phantom that contained a thin metallic needle. The experiments show that the algorithms are capable of detecting a needle with millimeter accuracy. The computational time, of the order of milliseconds, permits real-time needle localization.
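The RANSAC step for the needle axis can be sketched with a plain line-fitting routine; the toy points, inlier tolerance and iteration count below are assumptions, and the thesis additionally combines this step with sensor priors and a Kalman filter.

```python
import numpy as np

def ransac_line(points: np.ndarray, n_iter: int = 200, tol: float = 2.0, seed: int = 0):
    """Fit a 2D line to points of shape (N, 2); return (point_on_line, unit_direction)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line through p1.
        rel = points - p1
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (p1, d)
    return best_model

rng = np.random.default_rng(5)
t = rng.uniform(0, 100, 80)
axis_pts = np.stack([t, 0.5 * t + 10], axis=1) + rng.normal(0, 1.0, (80, 2))
clutter = rng.uniform(0, 100, (20, 2))
origin, direction = ransac_line(np.vstack([axis_pts, clutter]))
print(direction)   # close to the unit vector along slope 0.5
```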
APA, Harvard, Vancouver, ISO, and other styles
34

Pérez, Benito Cristina. "Color Image Processing based on Graph Theory." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/123955.

Full text
Abstract:
Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. Image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing is aimed at reducing noise, and sharpening at improving or recovering imprecise or damaged information of image details and edges with insufficient sharpness or blurred content that prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Besides, there are also many sharpening techniques, which try to combat the loss of information due to blurring of image texture and need to contemplate the existence of noise in the image they process. When dealing with a noisy image, any sharpening technique may amplify the noise. Although the intuitive idea to solve this last case would be previous filtering and later sharpening, this approach has proved not to be optimal: the filtering could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise.
Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one that smooths edge/detail regions. Further, another application of our model uses the pixel characterization to successfully perform a simultaneous smoothing and sharpening of color images. In this way, we address one of the classical challenges within the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive both from an objective (numerical) and a visual evaluation point of view.
Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
APA, Harvard, Vancouver, ISO, and other styles
35

Udas, Swati. "Classification algorithms for finding the eye fixation from digital images /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Roman-Gonzalez, Avid. "Compression Based Analysis of Image Artifacts: Application to Satellite Images." Phd thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00935029.

Full text
Abstract:
This thesis aims at an automatic detection of artifacts in optical satellite images such as aliasing, A/D conversion problems, striping, and compression noise; in fact, all blemishes that are unusual in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult when the resolution of the image improves. For images of low, medium or high resolution, the artifact signatures are sufficiently different from the useful signal, thus allowing their characterization as distortions; however, when the resolution improves, the artifacts have, in terms of signal theory, a similar signature to the interesting objects in an image. Although it is more difficult to detect artifacts in very high resolution images, we need analysis tools that work properly, without impeding the extraction of objects in an image. Furthermore, the detection should be as automatic as possible, given the quantity and ever-increasing volumes of images that make any manual detection illusory. Finally, experience shows that artifacts are not all predictable nor can they be modeled as expected. Thus, any artifact detection shall be as generic as possible, without requiring the modeling of their origin or their impact on an image. Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing. This includes the evaluation of image quality, compression, watermarking, detecting attacks, image tampering, the montage of photographs, steganalysis, etc. In general, the techniques used to address these problems are based on direct or indirect measurement of intrinsic information and mutual information. Therefore, this thesis has the objective to translate these approaches to artifact detection in Earth observation images, based particularly on the theories of Shannon and Kolmogorov, including approaches for measuring rate-distortion and pattern-recognition based compression. The results from these theories are then used to detect too low or too high complexities, or redundant patterns. The test images being used are from the satellite instruments SPOT, MERIS, etc. We propose several methods for artifact detection. The first method is using the Rate-Distortion (RD) function obtained by compressing an image with different compression factors and examines how an artifact can result in a high degree of regularity or irregularity affecting the attainable compression rate. The second method is using the Normalized Compression Distance (NCD) and examines whether artifacts have similar patterns. The third method is using different approaches for RD such as the Kolmogorov Structure Function and the Complexity-to-Error Migration (CEM) for examining how artifacts can be observed in compression-decompression error maps. Finally, we compare our proposed methods with an existing method based on image quality metrics. The results show that the artifact detection depends on the artifact intensity and the type of surface cover contained in the satellite image.
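The Normalized Compression Distance used here has a compact generic definition; the sketch below approximates it with zlib as the compressor on random byte patches, which is only an illustration of the measure, not the thesis's experiments on SPOT or MERIS data.

```python
import zlib
import numpy as np

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

rng = np.random.default_rng(6)
patch_a = rng.integers(0, 256, 4096, dtype=np.uint8).tobytes()
patch_b = rng.integers(0, 256, 4096, dtype=np.uint8).tobytes()
# A patch is much closer to itself than to an unrelated patch.
print(round(ncd(patch_a, patch_a), 3), round(ncd(patch_a, patch_b), 3))
```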
APA, Harvard, Vancouver, ISO, and other styles
37

Andersen, Evan. "An analysis of the art image interchange cycle within fine art museums /." Online version of thesis, 2010. http://hdl.handle.net/1850/11981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Mohamed, Aamer S. S. "From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval methods because of the increasing amount of digital images. Although JPEG supplies systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles for further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize the low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning or detection of semantic objects generates an urgent need to link the low-level features with semantic understanding of the observed visual information. To solve such a 'semantic gap' problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into some semantic categories by a semantic object detection and categorization algorithm.
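Face detection as a semantic annotator, as discussed above, can be sketched with OpenCV's stock Haar cascade; this assumes the opencv-python package (which ships the cascade file) and is only an illustration, not the thesis's classifiers.

```python
import cv2
import numpy as np

# Standard frontal-face Haar cascade shipped with opencv-python.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def annotate_faces(bgr_image: np.ndarray):
    """Return (x, y, w, h) face boxes; an image with a box could be tagged 'face/person'."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in boxes]

# Dummy image just to show the call; on real photos the boxes would drive semantic tags.
print(annotate_faces(np.zeros((240, 320, 3), dtype=np.uint8)))
```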
APA, Harvard, Vancouver, ISO, and other styles
39

Karathanou, Argyro. "Image processing for on-line analysis of electron microscope images : automatic Recognition of Reconstituted Membranes." Phd thesis, Université de Haute Alsace - Mulhouse, 2009. http://tel.archives-ouvertes.fr/tel-00559800.

Full text
Abstract:
The image analysis techniques presented in the present thesis have been developed as part of a European project dedicated to the development of an automatic membrane protein crystallization pipeline. A large number of samples is simultaneously produced and assessed by transmission electron microscope (TEM) screening. Automating this fast step requires an on-line analysis of acquired images to ensure microscope control, by selecting the regions to be observed at high magnification and identifying the components for specimen characterization. The observation of the sample at medium magnification provides the information that is essential to characterize the success of the 2D crystallization. The resulting objects, and especially the artificial membranes, are identifiable at this scale. The latter present only a few characteristic signatures, appearing in an extremely noisy context with gray-level fluctuations. Moreover, they are practically transparent to electrons, yielding low contrast. This thesis presents an ensemble of image processing techniques to analyze medium magnification images (5-15 nm/pixel). The original contribution of this work lies in: i) a statistical evaluation of contours by measuring the correlation between the gray-levels of pixels neighbouring the contour and a gradient signal, for over-segmentation reduction, ii) the recognition of foreground entities of the image, and iii) an initial study for their classification. This chain has already been tested on-line on a prototype and is currently being evaluated.
APA, Harvard, Vancouver, ISO, and other styles
40

Gundersen, Henrik Mogens, and Bjørn Fossan Rasmussen. "An Application of Image Processing Techniques for Enhancement and Segmentation of Bruises in Hyperspectral Images." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9594.

Full text
Abstract:
Hyperspectral images contain vast amounts of data which can provide crucial information to applications within a variety of scientific fields. Increasingly powerful computer hardware has made it possible to efficiently treat and process hyperspectral images. This thesis is interdisciplinary and focuses on applying known image processing algorithms to a new problem domain, involving bruises on human skin in hyperspectral images. Currently, no research regarding image detection of bruises on human skin has been uncovered. However, several articles have been written on hyperspectral bruise detection on fruits and vegetables. Ratio, difference and principal component analysis (PCA) were commonly applied enhancement algorithms within this field. The three algorithms, in addition to K-means clustering and the watershed segmentation algorithm, have been implemented and tested through a batch application developed in C# and MATLAB. The thesis seeks to determine if the enhancement algorithms can be applied to improve bruise visibility in hyperspectral images for visual inspection. In addition, it also seeks to answer whether the enhancements provide a better segmentation basis. Known spectral characteristics form the experimentation basis, in addition to identification through visual inspection. To this end, a series of experiments were conducted. The tested algorithms provided a better description of the bruises, the extent of the bruising, and the severity of the damage. However, the algorithms tested are not considered robust in terms of consistency of results. It is therefore recommended that the image acquisition setup be standardised for all future hyperspectral images. A larger, more varied data set would increase the statistical power of the results and improve test conclusion validity. Results indicate that the ratio, difference, and principal component analysis (PCA) algorithms can enhance bruise visibility for visual analysis. However, images that contained weakly visible bruises did not show significant improvements in bruise visibility. Non-visible bruises were not made visible using the enhancement algorithms. Results from the enhancement algorithms were segmented and compared to segmentations of the original reflectance images. The enhancement algorithms provided results that gave more accurate bruise regions using K-means clustering and the watershed segmentation. Both segmentation algorithms gave the overall best results using principal components as input. Watershed provided less accurate segmentations of the input from the difference and ratio algorithms.
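The band ratio, band difference and PCA enhancements named above reduce to a few NumPy operations on a hyperspectral cube; the cube and band indices in this sketch are arbitrary placeholders, not the wavelengths or data used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
cube = rng.random((64, 64, 30))                     # rows x cols x spectral bands

eps = 1e-8
ratio = cube[:, :, 20] / (cube[:, :, 5] + eps)      # ratio of two chosen bands
difference = cube[:, :, 20] - cube[:, :, 5]         # difference of the same bands

# PCA: project each pixel spectrum onto the leading eigenvectors of the band covariance.
pixels = cube.reshape(-1, cube.shape[2])
centered = pixels - pixels.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigvals)[::-1]
pc_images = (centered @ eigvecs[:, order[:3]]).reshape(64, 64, 3)

print(ratio.shape, difference.shape, pc_images.shape)
```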
APA, Harvard, Vancouver, ISO, and other styles
41

Mohamed, Aamer Saleh Sahel. "From content-based to semantic image retrieval : low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval methods because of the increasing amount of digital images. Although JPEG supplies systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles for further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize the low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning or detection of semantic objects generates an urgent need to link the low-level features with semantic understanding of the observed visual information. To solve such a 'semantic gap' problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into some semantic categories by a semantic object detection and categorization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
42

Cheriyadat, Anil Meerasa. "Limitations of principal component analysis for dimensionality-reduction for classification of hyperspectral data." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-11072003-133109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Coleman, Sonya. "Scalable operators for adaptive processing of digital images." Thesis, University of Ulster, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cook, Anthony John. "Digital image processing using colour space transformation." Thesis, University of Hertfordshire, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323433.

Full text
Abstract:
The purpose of the work is to explore the feasibility of devising a computer system that implements the desirable effects of a photographic filter and provides an environment for colour filter design for image processing. Using conversion from RGB to the CIELUV colour space, a new method for implementing a photographic filter as a digital filter is described. A filter is implemented by converting image pixel RGB values into CIELUV (u', v') and L* values and operates using the visual wavelength values provided by the (u', v') chromaticity diagram. However, the (u', v') diagram cannot provide wavelength values for pixels that correspond to (u', v') points in the 'purple line' sector of the diagram. These pixels are allocated wavelengths by means of a new wavelength scale that makes it possible for the filter to process any pixel in a digital image. Filter transmittance data for visual spectrum wavelengths is obtained from published tables. The transmittance data for purple sector pixels is provided by a colour model of the (u', v') chromaticity diagram. The system is evaluated by means of the Macbeth ColorChecker chart and physical measurements. The extension of the CIELUV diagram with an equivalent wavelength scale provides a new environment for the enhancement and manipulation of digital colour images.
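A minimal sketch of the pixel conversion step described above, from RGB to CIE 1976 (u', v') chromaticity plus L*, might look as follows. The use of scikit-image for the sRGB-to-XYZ step and the assumption of a D65 white point with normalised luminance are choices made for this illustration, not details taken from the thesis.

```python
# A minimal sketch: sRGB -> XYZ -> (u', v') chromaticity and L*.
import numpy as np
from skimage.color import rgb2xyz

def rgb_to_uv_L(rgb):
    """rgb: float image in [0, 1], shape (rows, cols, 3)."""
    xyz = rgb2xyz(rgb)
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]

    denom = X + 15.0 * Y + 3.0 * Z + 1e-12
    u_prime = 4.0 * X / denom          # CIE 1976 u' chromaticity
    v_prime = 9.0 * Y / denom          # CIE 1976 v' chromaticity

    # L* relative to the white point (Yn = 1.0 assumed for normalised XYZ)
    Y_ratio = np.clip(Y, 1e-12, None)
    L_star = np.where(Y_ratio > (6 / 29) ** 3,
                      116.0 * np.cbrt(Y_ratio) - 16.0,
                      (29 / 3) ** 3 * Y_ratio)
    return u_prime, v_prime, L_star
```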
APA, Harvard, Vancouver, ISO, and other styles
45

Jin, Song. "Bispectral reconstruction of speckle-degraded images /." Online version of thesis, 1992. http://hdl.handle.net/1850/11230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Patel, Jasmica. "Detection and measurement of multiple sclerosis brain lesions from magnetic resonance images using image processing techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0019/MQ48285.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Usta, Fatma. "Image Processing Methods for Myocardial Scar Analysis from 3D Late-Gadolinium Enhanced Cardiac Magnetic Resonance Images." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37920.

Full text
Abstract:
Myocardial scar, non-viable tissue that forms in the myocardium due to an insufficient blood supply to the heart muscle, is one of the leading causes of life-threatening heart disorders, including arrhythmias. Analysis of myocardial scar is important for predicting the risk of arrhythmia and the locations of re-entrant circuits in patients' hearts. For applications such as computational modeling of cardiac electrophysiology aimed at stratifying patient risk for post-infarction arrhythmias, reconstruction of the intact geometry of scar is required. Currently, 2D multi-slice late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is widely used to detect and quantify myocardial scar regions of the heart. However, due to the anisotropic spatial dimensions in 2D LGE-MR images, creating scar geometry from these images results in substantial reconstruction errors. For applications that require reconstruction of the intact geometry of scar surfaces, 3D LGE-MR images are better suited as they are isotropic in voxel dimensions and have a higher resolution. While many techniques have been reported for segmentation of scar using 2D LGE-MR images, the equivalent studies for 3D LGE-MRI are limited. Most of these 2D and 3D techniques are basic intensity threshold-based methods. However, due to the lack of an optimum threshold (Th) value, these intensity threshold-based methods are not robust in dealing with complex scar segmentation problems. In this study, we propose an algorithm for segmentation of myocardial scar from 3D LGE-MR images based on a Markov random field based continuous max-flow (CMF) method. We utilize the segmented myocardium as the region of interest for our algorithm. We evaluated our CMF method for accuracy by comparing its results to manual delineations using 3D LGE-MR images of 34 patients. We also compared the results of the CMF technique to those obtained by the conventional full-width-at-half-maximum (FWHM) and signal-threshold-to-reference-mean (STRM) methods. The CMF method yields a Dice similarity coefficient (DSC) of 71 ± 8.7% and an absolute volume error (|VE|) of 7.56 ± 7 cm³. Overall, the CMF method outperformed the conventional methods for almost all reported metrics in scar segmentation. We also present a comparison study of scar geometries obtained from 2D versus 3D LGE-MRI. As the myocardial scar geometry greatly influences the sensitivity of risk prediction in patients, we compare and analyse, in addition to the scar segmentation study, the differences in the reconstructed geometry of scar generated using 2D versus 3D LGE-MR images. We use a retrospectively acquired dataset of 24 patients with a myocardial scar who underwent both 2D and 3D LGE-MR imaging. We use manually segmented scar volumes from 2D and 3D LGE-MRI. We then reconstruct the 2D scar segmentation boundaries to 3D surfaces using a LogOdds-based interpolation method. We use numerous metrics to quantify and analyze the scar geometry, including fractal dimensions, the number of connected components, and mean volume difference. The higher 3D fractal dimension results indicate that 3D LGE-MRI produces a more complex surface geometry by better capturing the sparse nature of the scar. Finally, 3D LGE-MRI produces a larger scar surface volume (27.49 ± 20.38 cm³) than 2D-reconstructed LGE-MRI (25.07 ± 16.54 cm³).
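For readers unfamiliar with the reference methods and the evaluation metric mentioned above, the sketch below gives simplified versions of FWHM and STRM thresholding and of the Dice similarity coefficient. The choice of five standard deviations for STRM and the exact masking conventions are illustrative assumptions, not the settings used in the thesis.

```python
# A minimal sketch of FWHM and STRM scar thresholding and the Dice metric.
import numpy as np

def fwhm_scar(intensity, myo_mask):
    """Full-width-at-half-maximum: scar = myocardial voxels brighter than
    half of the maximum myocardial intensity (simplified definition)."""
    half_max = 0.5 * intensity[myo_mask].max()
    return myo_mask & (intensity >= half_max)

def strm_scar(intensity, myo_mask, remote_mask, n_sd=5):
    """Signal-threshold-to-reference-mean: scar = voxels above the mean of
    a remote (healthy) myocardial region plus n_sd standard deviations."""
    remote = intensity[remote_mask]
    return myo_mask & (intensity >= remote.mean() + n_sd * remote.std())

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)
```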
APA, Harvard, Vancouver, ISO, and other styles
48

Lazareva, A. "An automated image processing system for the detection of photoreceptor cells in adaptive optics retinal images." Thesis, City, University of London, 2017. http://openaccess.city.ac.uk/19164/.

Full text
Abstract:
The rapid progress in Adaptive Optics (AO) imaging over the last decades has had a transformative impact on the entire approach underpinning investigations of retinal tissue. Capable of imaging the retina in vivo at the cellular level, AO systems have revealed new insights into retinal structures, function, and the origins of various retinal pathologies. This has expanded the field of clinical research and opened a wide range of applications for AO imaging. Advances in image processing techniques contribute to better observation of retinal microstructures and therefore more accurate detection of pathological conditions. The development of automated tools for processing images obtained with AO allows for objective examination of a larger number of images, with time and cost savings, and thus facilitates the use of AO imaging as a practical and efficient tool by making it widely accessible to the clinical ophthalmic community. In this work, an image processing framework is developed that allows for enhancement of AO high-resolution retinal images and accurate detection of photoreceptor cells. The proposed framework consists of several stages: image quality assessment, illumination compensation, noise suppression, image registration, image restoration, enhancement and detection of photoreceptor cells. The visibility of retinal features is improved by tackling specific components of the AO imaging system that affect the quality of the acquired retinal data. We therefore attempt to fully recover AO retinal images, free from any induced degradation effects. A comparative study of different methods and an evaluation of their efficiency on retinal datasets is performed by assessing image quality. In order to verify the achieved results, the cone packing density distribution was calculated and correlated with statistical histological data. From the performed experiments, it can be concluded that the proposed image processing framework can effectively improve photoreceptor cell image quality and thus can serve as a platform for further investigation of retinal tissues. Quantitative analysis of the retinal images obtained with the proposed image processing framework can be used for comparison with data related to pathological retinas, as well as for understanding the effect of age and retinal pathology on cone packing density and other microstructures.
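A hedged sketch of the final stage of such a framework, detecting cone centres as local intensity maxima and estimating packing density, is given below. The smoothing width, minimum peak spacing, and pixel scale are illustrative assumptions rather than the parameters used in the thesis.

```python
# A minimal sketch of cone detection and packing-density estimation.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

def detect_cones(image, sigma=1.0, min_distance=4):
    """image: 2D AO reflectance image. Returns (row, col) cone centres."""
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)
    return peak_local_max(smoothed, min_distance=min_distance)

def packing_density(centres, image_shape, microns_per_pixel):
    """Cone packing density in cones per square millimetre."""
    area_mm2 = (image_shape[0] * microns_per_pixel / 1000.0) * \
               (image_shape[1] * microns_per_pixel / 1000.0)
    return len(centres) / area_mm2
```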
APA, Harvard, Vancouver, ISO, and other styles
49

Moon, Bill. "Employment of Crystallographic Image Processing Techniques to Scanning Probe Microscopy Images of Two-Dimensional Periodic Objects." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/699.

Full text
Abstract:
Thin film arrays of molecules or supramolecules are active subjects of investigation because of their potential value in electronics, chemical sensing, catalysis, and other areas. Scanning probe microscopes (SPMs), including scanning tunneling microscopes (STMs) and atomic force microscopes (AFMs), are commonly used for the characterization and metrology of thin film arrays. As opposed to transmission electron microscopy (TEM), SPMs have the advantage that they can often make observations of thin films in air or liquid, while TEM requires highly specialized techniques if the sample is to be in anything but vacuum. SPM is a surface imaging technique, while TEM typically images a 2D projection of a thin 3D sample. Additionally, variants of SPM can make observations of more than just topography; for instance, magnetic force microscopy measures nanoscale magnetic properties. Thin film arrays are typically two-dimensionally periodic. A perfect, infinite two-dimensionally periodic array is mathematically constrained to belong to one of only 17 possible 2D plane symmetry groups. Any real image is both finite and imperfect. Crystallographic Image Processing (CIP) is an algorithm that Fourier transforms a real image into a 2D array of complex numbers, the Fourier coefficients of the image intensity, and then uses the relationship between those coefficients to first ascertain the 2D plane symmetry group that the imperfect, finite image is most likely to possess, and then adjust those coefficients that are symmetry-related so as to perfect the symmetry. A Fourier synthesis of the symmetrized coefficients leads to a perfectly symmetric image in direct space (when accumulated rounding and calculation errors are ignored). The technique is, thus, an averaging technique over the direct space experimental data that were selected from the thin film array. The image must have periodicity in two dimensions in order for this technique to be applicable. CIP has been developed over the past 40 years by the electron crystallography community, which works with 2D projections from 3D samples. Any periodic sample, whether it is 2D or 3D, has an "ideal structure", which is the structure absent any crystal defects. The ideal structure can be considered one average unit cell, propagated by translation into the whole sample. The "real structure" is an actual sample containing vacancies, dislocations, and other defects. Typically the goal of electron and other types of microscopy is examination of the real structure, as the ideal structure of a crystal is already known from X-ray crystallography. High resolution transmission electron microscope image based electron crystallography, on the other hand, reveals the ideal crystal structure by crystallographic averaging. The ideal structure of a 2D thin film cannot easily be examined in a spatially selective fashion by grazing-incidence X-ray or low-energy electron diffraction based crystallography. SPMs straightforwardly observe thin films in direct space, but SPM accuracy is hampered by blunt or multiple tips and other unavoidable instrument errors. Especially since the film is often of a supramolecular system whose molecules are weakly bonded (via pi bonds, hydrogen bonds, etc.) both to the substrate and to each other, it is relatively easy for a molecule from the film to adhere to the scanning tip during the scan and become part of the tip during subsequent observation.
If the thin film array has two-dimensional periodicity, CIP is a unique and effective tool both for image enhancement (determination of ideal structure) and for the quantification of overall instrument error. In addition, if a sample of known 2D periodicity is scanned, CIP can return information about the contribution of the instrument itself to the image. In this thesis we show how the technique is applied to images of two dimensionally periodic samples taken by SPMs. To the best of our knowledge, this has never been done before. Since 2D periodic thin film arrays have an ideal structure that is mathematically constrained to belong to one of the 17 plane symmetry groups, we can use CIP to determine that group and use it for a particularly effective averaging algorithm. We demonstrate that the use of this averaging algorithm removes noise and random error from images more effectively than translational averaging, also known as "lattice averaging" or "Fourier filtering". We also demonstrate the ability to correct systematic errors caused by hysteresis in the scanning process. These results have the effect of obtaining the ideal structure of the sample, averaging out the defects crystallographically, by providing an average unit cell which, when translated, represents the ideal structure. In addition, if one has recorded a scanning probe image of a 2D periodic sample of known symmetry, we demonstrate that it is possible to use the Fourier coefficients of the image transform to solve the inverse problem and calculate the point spread function (PSF) of the instrument. Any real scanning probe instrument departs from the ideal PSF of a Dirac delta function, and CIP allows us to quantify this departure as far as point symmetries are concerned. The result is a deconvolution of the "effective tip", which includes any blunt or multiple tip effects, as well as the effects caused by adhesion of a sample molecule to the scanning tip, or scanning irregularities unrelated to the physical tip. We also demonstrate that the PSF, once known, can be used on a second image taken by the same instrument under approximately the same experimental conditions to remove errors introduced during that second imaging process. The preponderance of two-dimensionally periodic samples as subjects of SPM observation makes the application of CIP to SPM images a valuable technique to extract a maximum amount of information from these images. The improved resolution of current SPMs creates images with more higher-order Fourier coefficients than earlier, "softer" images; these higher-order coefficients are especially amenable to CIP, which can then effectively magnify the resolution improvement created by better hardware. The improved resolution combined with the current interest in supramolecular structures (which although 3D usually start building on a 2D periodic surface) appears to provide an opportunity for CIP to significantly contribute to SPM image processing.
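The symmetrising step at the heart of CIP can be sketched, under simplifying assumptions, as averaging symmetry-related Fourier coefficients and transforming back. The example below enforces only plane group p2, treats the whole image as one periodic block rather than working with indexed lattice reflections, and places the two-fold axis at the array origin; none of these choices is taken from the thesis.

```python
# A minimal sketch of symmetry averaging in Fourier space for plane group p2,
# whose operation relates F(g) and F(-g).
import numpy as np

def p2_average(image):
    """Return a p2-symmetrised version of a 2D periodic image (simplified)."""
    F = np.fft.fft2(image)

    # Flipping the coefficient array (with a roll of one sample to keep the
    # zero-frequency term fixed) gives F(-g) on the same grid as F(g).
    F_neg = np.roll(np.flip(F), shift=(1, 1), axis=(0, 1))
    F_sym = 0.5 * (F + F_neg)          # average symmetry-related coefficients

    return np.real(np.fft.ifft2(F_sym))
```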
APA, Harvard, Vancouver, ISO, and other styles
50

Gonzalez, Ana Guadalupe Salazar. "Structure analysis and lesion detection from retinal fundus images." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6456.

Full text
Abstract:
Ocular pathology is one of the main health problems worldwide. The number of people with retinopathy symptoms has increased considerably in recent years. Early, adequate treatment has been shown to be effective in avoiding loss of vision. The analysis of fundus images is a non-intrusive option for periodical retinal screening. Various models designed for the analysis of retinal images are based on supervised methods, which require hand-labelled images and processing time as part of the training stage. On the other hand, most of the methods have been designed on the basis of specific characteristics of the retinal images (e.g. field of view, resolution). This restricts their performance to a reduced group of retinal images with similar features. For these reasons an unsupervised model for the analysis of retinal images is required: a model that can work without human supervision or interaction and that is able to perform on retinal images with different characteristics. In this research, we have worked on the development of this type of model. The system locates the eye structures (e.g. optic disc and blood vessels) as a first step. These structures are then masked out from the retinal image in order to create a clear field in which to perform lesion detection. We have selected the Graph Cut technique as a base for designing the retinal structure segmentation methods. This choice allows prior knowledge to be incorporated to constrain the search for the optimal segmentation. Different link weight assignments were formulated in order to address the specific needs of the retinal structures (e.g. shape). This research project has brought together the fields of image processing and ophthalmology to create a novel system that contributes significantly to the state of the art in medical image analysis. This new knowledge provides a new alternative for addressing the analysis of medical images and opens a new panorama for researchers exploring this area.
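The Graph Cut formulation mentioned above can be illustrated with a small sketch in which each pixel is a node, terminal links encode agreement with foreground and background intensity models, and neighbour links penalise cutting between similar pixels. The use of networkx, the Gaussian intensity models, and the smoothness weight are assumptions made for this example and are not taken from the thesis.

```python
# A minimal graph-cut segmentation sketch (Boykov-Jolly style t-links/n-links).
import numpy as np
import networkx as nx

def graph_cut_segment(image, mu_fg, mu_bg, sigma=0.1, lam=2.0):
    """image: 2D float image in [0, 1]. Returns a boolean foreground mask."""
    h, w = image.shape
    G = nx.DiGraph()
    src, snk = 's', 't'

    def nll(value, mu):
        # negative log-likelihood under a simple Gaussian intensity model
        return (value - mu) ** 2 / (2 * sigma ** 2)

    for y in range(h):
        for x in range(w):
            node = (y, x)
            # t-links: cost of labelling the pixel background / foreground
            G.add_edge(src, node, capacity=nll(image[y, x], mu_bg))
            G.add_edge(node, snk, capacity=nll(image[y, x], mu_fg))
            # n-links: discourage cutting between similar neighbours
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx_ = y + dy, x + dx
                if ny < h and nx_ < w:
                    wgt = lam * np.exp(-(image[y, x] - image[ny, nx_]) ** 2
                                       / (2 * sigma ** 2))
                    G.add_edge(node, (ny, nx_), capacity=wgt)
                    G.add_edge((ny, nx_), node, capacity=wgt)

    _, (reachable, _) = nx.minimum_cut(G, src, snk)
    mask = np.zeros((h, w), dtype=bool)
    for node in reachable:
        if node != src:
            mask[node] = True
    return mask
```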
APA, Harvard, Vancouver, ISO, and other styles
