To see the other types of publications on this topic, follow the link: Domain of images.

Dissertations / Theses on the topic 'Domain of images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Domain of images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Thornström, Johan. "Domain Adaptation of Unreal Images for Image Classification." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165758.

Full text
Abstract:
Deep learning has been intensively researched in computer vision tasks like image classification. Collecting and labeling images that these neural networks are trained on is labor-intensive, which is why alternative methods of collecting images are of interest. Virtual environments allow rendering images and automatic labeling, which could speed up the process of generating training data and reduce costs. This thesis studies the problem of transfer learning in image classification when the classifier has been trained on rendered images using a game engine and tested on real images. The goal is to render images using a game engine to create a classifier that can separate images depicting people wearing civilian clothing or camouflage. The thesis also studies how domain adaptation techniques using generative adversarial networks could be used to improve the performance of the classifier. Experiments show that it is possible to generate images that can be used for training a classifier capable of separating the two classes. However, the experiments with domain adaptation were unsuccessful. It is instead recommended to improve the quality of the rendered images in terms of features used in the target domain to achieve better results.
APA, Harvard, Vancouver, ISO, and other styles
2

Manamasa, Krishna Himaja. "Domain adaptation from 3D synthetic images to real images." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19303.

Full text
Abstract:
Background. Domain adaptation describes a model learning from a source data distribution and performing well on target data. Here, domain adaptation is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master's thesis is to apply 3D domain adaptation from synthetic images to real images. It is an attempt to bridge the gap between different domains (synthetic and real point-cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Over the course of the thesis project, various methods for understanding and analyzing the data in order to bridge the gap between CAD and CAM are examined. Literature review and controlled experiment are the research methodologies followed during implementation. In this project, we experiment with four different deep learning models on the generated data and compare their performance to determine which model performs best. Results. The results are reported through two metrics, accuracy and training time, recorded for each deep learning model after the experiment. These metrics are illustrated as graphs for comparative analysis between the models on which the data is trained and tested. PointDAN showed better results, with higher accuracy, than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that focuses on local and global feature alignment with single-view point data, shows better results with our data.
APA, Harvard, Vancouver, ISO, and other styles
3

VALE, EDUARDO ESTEVES. "ENHANCEMENT OF IMAGES IN THE TRANSFORM DOMAIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8237@1.

Full text
Abstract:
Esta Dissertação destina-se ao desenvolvimento de novas técnicas de realce aplicadas no domínio da transformada. O estudo das transformadas bidimensionais motivaram o desenvolvimento de técnicas baseadas nestas ferramentas matemáticas. Análises comparativas entre os métodos de realce no domínio espacial e no domínio da transformada logo revelaram as vantagens do uso das transformadas. É proposta e analisada uma nova técnica de realce no domínio da Transformada Cosseno Discreta (DCT). Os resultados mostraram que esta nova proposta é menos afetada por ruído e realça mais a imagem que as técnicas apresentadas na literatura. Adicionalmente, considera-se uma estratégia com o objetivo de eliminar o efeito de escurecimento da imagem processada pelo Alpha-rooting. É também apresentada uma nova proposta de realce no domínio da Transformada Wavelet Discreta (DWT). As simulações mostraram que a imagem resultante possui melhor qualidade visual que a de técnicas relatadas na literatura, além de ser pouco afetada pelo ruído. Além disso, a escolha do parâmetro de realce é simplificada.
This dissertation is aimed at the development of new enhancement techniques applied in the transform domain. The study of two-dimensional transforms motivated the development of techniques based on these mathematical tools. Comparative analyses between enhancement methods in the spatial domain and in the transform domain revealed the advantages of using transforms. A new enhancement proposal in the Discrete Cosine Transform (DCT) domain is analyzed. The results showed that this new proposal is less affected by noise and enhances the image more than other techniques reported in the literature. In addition, a strategy to eliminate the darkening effect of enhancement by alpha-rooting is considered. A new enhancement proposal in the Discrete Wavelet Transform (DWT) domain is also presented. Simulation results showed that the enhanced images have better visual quality than those produced by techniques reported in the literature and are less affected by noise. Moreover, the choice of the enhancement parameter is simplified.
APA, Harvard, Vancouver, ISO, and other styles
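The alpha-rooting enhancement named in the abstract above has a compact core: each transform coefficient keeps its phase while its magnitude is raised to a power alpha with 0 < alpha <= 1, which boosts detail relative to the dominant low-frequency terms. A minimal numpy sketch of that idea, using the DFT for brevity where the dissertation works in the DCT and DWT domains:

```python
import numpy as np

def alpha_rooting(img, alpha=0.9):
    """Enhance an image by raising transform-coefficient
    magnitudes to the power alpha while keeping their phase."""
    X = np.fft.fft2(img.astype(float))
    mag = np.abs(X)
    # Scale each coefficient by |X|^(alpha - 1); guard zero coefficients.
    scale = np.where(mag > 0, mag ** (alpha - 1.0), 0.0)
    return np.fft.ifft2(X * scale).real
```

With alpha = 1 the image is returned unchanged; lowering alpha shrinks the largest magnitudes (including the DC term) the most, which is the darkening effect that the abstract's additional strategy is meant to eliminate.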
4

Grahn, Fredrik, and Kristian Nilsson. "Object Detection in Domain Specific Stereo-Analysed Satellite Images." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159917.

Full text
Abstract:
Given satellite images with accompanying pixel classifications and elevation data, we propose different solutions to object detection. The first method uses hierarchical clustering for segmentation and then employs different methods of classification. One of these classification methods used domain knowledge to classify objects, while the other used Support Vector Machines. Additionally, a combination of three Support Vector Machines was used in a hierarchical structure, which outperformed the regular Support Vector Machine method in most of the evaluation metrics. The second approach is more conventional, with different types of Convolutional Neural Networks. A segmentation network was used as well as a few detection networks and different fusions between these. The Convolutional Neural Network approach proved to be the better of the two in terms of precision and recall, but the clustering approach was not far behind. This work was done using a relatively small amount of data, which could have negatively impacted the results of the machine learning models.
APA, Harvard, Vancouver, ISO, and other styles
5

Soukal, David. "Advanced steganographic and steganalytic methods in the spatial domain." Diss., Online access via UMI:, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Naraharisetti, Sahasan Mohanty Saraju. "Region aware DCT domain invisible robust blind watermarking for color images." [Denton, Tex.] : University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Naraharisetti, Sahasan. "Region aware DCT domain invisible robust blind watermarking for color images." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9748/.

Full text
Abstract:
The multimedia revolution has made a strong impact on our society. The explosive growth of the Internet and the ease of access to digital information generate new opportunities and challenges. The ease of editing and duplication in the digital domain has created concern over copyright protection for content providers. Various schemes to embed secondary data in digital media have been investigated to preserve copyright and to discourage unauthorized duplication; digital watermarking is a viable solution. This thesis proposes a novel invisible watermarking scheme: a discrete cosine transform (DCT) domain based watermark embedding and blind extraction algorithm for copyright protection of color images. Testing the proposed watermarking scheme's robustness and security via different benchmarks proves its resilience to digital attacks. The detector's response, PSNR, and RMSE results show that our algorithm has better security performance than most existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
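To make the abstract's "DCT domain based watermark embedding and blind extraction" concrete: a classic blind scheme in this family hides one bit per 8x8 block by enforcing an order relation between two mid-frequency DCT coefficients, so extraction needs neither the original image nor side information. This is a hedged sketch of the general family, not the thesis's specific algorithm; the coefficient positions and embedding strength are illustrative choices:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, strength=10.0):
    """Hide one bit in an 8x8 block: coefficient (3,4) > (4,3)
    encodes 1, the reverse order encodes 0."""
    C = dct_matrix(8)
    D = C @ block @ C.T
    a, b = (3, 4), (4, 3)  # mid-frequency positions (illustrative)
    if bit and D[a] <= D[b]:
        D[a], D[b] = D[b] + strength, D[a]
    elif not bit and D[a] >= D[b]:
        D[a], D[b] = D[b], D[a] + strength
    return C.T @ D @ C  # back to the pixel domain

def extract_bit(block):
    """Blind extraction: recompute the DCT and read the order."""
    C = dct_matrix(8)
    D = C @ block @ C.T
    return int(D[3, 4] > D[4, 3])
```

Robust variants additionally enforce a minimum gap between the two coefficients so the order relation survives mild compression or noise.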
8

Chakravarthy, Chinna Narayana Swamy Thrilok. "Combinational Watermarking for Medical Images." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5833.

Full text
Abstract:
Digitization of medical data has become a very important part of the modern healthcare system. Data can be transmitted easily at any time, anywhere in the world, over the Internet to get the best possible diagnosis for a patient. This digitized medical data must be protected at all times to preserve doctor-patient confidentiality. Watermarking can be used as an effective tool to achieve this. In this research project, image watermarking is performed both in the spatial domain and in the frequency domain to embed a shared image with medical image data and patient data, which includes the patient identification number. For the proposed system, Structural Similarity (SSIM) is used as an index to measure the quality of the watermarking process instead of Peak Signal to Noise Ratio (PSNR), since SSIM takes into account the visual perception of the images, whereas PSNR uses intensity levels to measure quality. The system response under ideal conditions as well as under the influence of noise was measured and the results were analyzed.
APA, Harvard, Vancouver, ISO, and other styles
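The abstract's preference for SSIM over PSNR is easy to state in code: PSNR depends only on pixel-wise intensity error, while SSIM compares luminance, contrast, and covariance statistics. A single-window numpy sketch (the standard SSIM of Wang et al. slides a Gaussian window over the image; the constants below are their usual defaults):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """SSIM computed over one global window (no sliding window)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Two distortions with the same mean squared error (hence the same PSNR) can receive very different SSIM scores when one of them disturbs local structure more, which is the visual-perception argument the abstract makes.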
9

Tasar, Onur. "Des images satellites aux cartes vectorielles." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4063.

Full text
Abstract:
Grâce à d'importants développements technologiques au fil des ans, il a été possible de collecter des quantités massives de données de télédétection. Par exemple, les constellations de divers satellites sont capables de capturer de grandes quantités d'images de télédétection à haute résolution spatiale ainsi que de riches informations spectrales sur tout le globe. La disponibilité de données aussi gigantesques a ouvert la porte à de nombreuses applications et a soulevé de nombreux défis scientifiques. Parmi ces défis, la génération automatique de cartes précises est devenue l'un des problèmes les plus intéressants et les plus anciens, car il s'agit d'un processus crucial pour un large éventail d'applications dans des domaines tels que la surveillance et l'aménagement urbains, l'agriculture de précision, la conduite autonome et la navigation.Cette thèse vise à développer de nouvelles approches pour générer des cartes vectorielles à partir d'images de télédétection. À cette fin, nous avons divisé la tâche en deux sous-étapes. La première étape consiste à générer des cartes matricielles à partir d'images de télédétection en effectuant une classification au niveau des pixels grâce à des techniques avancées d'apprentissage profond. La seconde étape vise à convertir les cartes matricielles en cartes vectorielles en utilisant des structures de données et des algorithmes de géométrie algorithmique. Cette thèse aborde les défis qui sont couramment rencontrés au cours de ces deux étapes. Bien que des recherches antérieures aient montré que les réseaux neuronaux convolutifs (CNN) sont capables de générer d'excellentes cartes lorsque les données d'entraînement sont représentatives des données d'essai, leurs performances diminuent considérablement lorsqu'il existe une grande différence de distribution entre les images d'entraînement et d'essai. 
Dans la première étape de notre traitement, nous visons principalement à surmonter les capacités de généralisation limitées des CNN pour effectuer une classification à grande échelle. Nous explorons également un moyen d'exploiter de multiples ensembles de données collectées à différentes époques avec des annotations pour des classes distinctes afin de former des CNN capables de générer des cartes pour toutes les classes.Dans la deuxième partie, nous décrivons une méthode qui vectorise les cartes matricielles pour les intégrer dans des applications de systèmes d'information géographique, ce qui complète notre chaîne de traitement. Tout au long de cette thèse, nous expérimentons sur un grand nombre d'images satellitaires et aériennes de très haute résolution. Nos expériences démontrent la robustesse et la capacité à généraliser des méthodes proposées
With the help of significant technological developments over the years, it has been possible to collect massive amounts of remote sensing data. For example, constellations of various satellites are able to capture large amounts of remote sensing images with high spatial resolution as well as rich spectral information over the globe. The availability of such a huge volume of data has opened the door to numerous applications and raised many challenges. Among these challenges, automatically generating accurate maps has become one of the most interesting and long-standing problems, since it is a crucial process for a wide range of applications in domains such as urban monitoring and management, precision agriculture, autonomous driving, and navigation. This thesis seeks to develop novel approaches to generate vector maps from remote sensing images. To this end, we split the task into two sub-stages. The former stage consists in generating raster maps from remote sensing images by performing pixel-wise classification using advanced deep learning techniques. The latter stage aims at converting raster maps to vector ones by leveraging computational geometry approaches. This thesis addresses the challenges that are commonly encountered within both stages. Although previous research has shown that convolutional neural networks (CNNs) are able to generate excellent maps when training data are representative of test data, their performance drops significantly when there is a large distribution difference between training and test images. In the first stage of our pipeline, we mainly aim at overcoming the limited generalization abilities of CNNs to perform large-scale classification.
We also explore a way of leveraging multiple data sets collected at different times, with annotations for separate classes, to train CNNs that can generate maps for all the classes. In the second part, we propose a method that vectorizes raster maps to integrate them into geographic information system applications, which completes our processing pipeline. Throughout this thesis, we experiment on a large number of very high resolution satellite and aerial images. Our experiments demonstrate the robustness and scalability of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Mohamed, Aamer S. S. "From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing amount of digital images. Although JPEG provides systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles to further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning, or the detection of semantic objects, generates an urgent need to link low-level features with a semantic understanding of the observed visual information. To solve this 'semantic gap' problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze, and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation.
Finally, images are interpreted and classified into semantic categories by a semantic object detection and categorization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
11

Liu, Jian. "A Study of Embedded Gradient Domain Tone Mapping Operators for High Dynamic Range Images." University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1384593667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mohamed, Aamer Saleh Sahel. "From content-based to semantic image retrieval : low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing amount of digital images. Although JPEG provides systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles to further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning, or the detection of semantic objects, generates an urgent need to link low-level features with a semantic understanding of the observed visual information. To solve this 'semantic gap' problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images for semantic descriptors. In this thesis, a paradigm to process, analyze, and interpret digital images is proposed. In order to speed up access to desired images, after accessing image data, image features are presented for analysis. This analysis gives not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation.
Finally, images are interpreted and classified into semantic categories by a semantic object detection and categorization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
13

Caye, Daudt Rodrigo. "Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT033.

Full text
Abstract:
L'analyse de l'imagerie satellitaire et aérienne d'observation de la Terre nous permet d'obtenir des informations précises sur de vastes zones. Une analyse multitemporelle de telles images est nécessaire pour comprendre l'évolution de ces zones. Dans cette thèse, les réseaux de neurones convolutifs sont utilisés pour détecter et comprendre les changements en utilisant des images de télédétection provenant de diverses sources de manière supervisée et faiblement supervisée. Des architectures siamoises sont utilisées pour comparer des paires d'images recalées et identifier les pixels correspondant à des changements. La méthode proposée est ensuite étendue à une architecture de réseau multitâche qui est utilisée pour détecter les changements et effectuer une cartographie automatique simultanément, ce qui permet une compréhension sémantique des changements détectés. Ensuite, un filtrage de classification et un nouvel algorithme de diffusion anisotrope guidée sont utilisés pour réduire l'effet du bruit d'annotation, un défaut récurrent pour les ensembles de données à grande échelle générés automatiquement. Un apprentissage faiblement supervisé est également réalisé pour effectuer une détection de changement au niveau des pixels en utilisant uniquement une supervision au niveau de l'image grâce à l'utilisation de cartes d'activation de classe et d'une nouvelle couche d'attention spatiale. Enfin, une méthode d'adaptation de domaine fondée sur un entraînement adverse est proposée. Cette méthode permet de projeter des images de différents domaines dans un espace latent commun où une tâche donnée peut être effectuée. Cette méthode est testée non seulement pour l'adaptation de domaine pour la détection de changement, mais aussi pour la classification d'images et la segmentation sémantique, ce qui prouve sa polyvalence
The analysis of satellite and aerial Earth observation images allows us to obtain precise information over large areas. A multitemporal analysis of such images is necessary to understand the evolution of these areas. In this thesis, convolutional neural networks are used to detect and understand changes using remote sensing images from various sources in supervised and weakly supervised settings. Siamese architectures are used to compare coregistered image pairs and to identify changed pixels. The proposed method is then extended into a multitask network architecture that is used to detect changes and perform land cover mapping simultaneously, which permits a semantic understanding of the detected changes. Then, classification filtering and a novel guided anisotropic diffusion algorithm are used to reduce the effect of biased label noise, which is a concern for automatically generated large-scale datasets. Weakly supervised learning is also used to perform pixel-level change detection with only image-level supervision, through class activation maps and a novel spatial attention layer. Finally, a domain adaptation method based on adversarial training is proposed, which succeeds in projecting images from different domains into a common latent space where a given task can be performed. This method is tested not only for domain adaptation for change detection, but also for image classification and semantic segmentation, which proves its versatility.
APA, Harvard, Vancouver, ISO, and other styles
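For contrast with the learned Siamese comparison described above, the simplest change-detection baseline does the comparison by hand: difference the coregistered pair and threshold. A numpy sketch of that baseline (a reference point, not the thesis's method):

```python
import numpy as np

def change_map(img_a, img_b, k=2.0):
    """Baseline change detection on a coregistered image pair:
    flag pixels whose absolute difference exceeds
    mean + k * std of the difference image."""
    d = np.abs(img_a.astype(float) - img_b.astype(float))
    return d > d.mean() + k * d.std()
```

This baseline fails exactly where the thesis's networks are designed to help: illumination shifts, seasonal appearance changes, and noisy labels all perturb the raw difference image.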
14

Zhang, Miaomiao. "Fourier-based reconstruction of ultrafast sectorial images in ultrasound." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI144/document.

Full text
Abstract:
L'échocardiographie est une modalité d'imagerie sûre, non-invasive, qui est utilisée pour évaluer la fonction et l'anatomie cardiaque en routine clinique. Mais la cadence maximale d’imagerie atteinte est limitée en raison de la vitesse limitée du son. Afin d’augmenter la fréquence d'image, l'utilisation d’ondes planes ou d’ondes divergentes en transmissinon a été proposée afin de réduire le nombre de tirs nécessaires à la reconstruction d'une image. L'objectif de cette thèse consiste à développer un procédé d'imagerie par ultrasons ultra-rapide en échocardiographie 2/3D basé sur une insonification par ondes divergentes et réalisant une reconstruction dans le domaine de Fourier. Les contributions principales obtenues au cours de la thèse sont décrites ci-dessous. La première contribution de cette thèse concerne un schéma de transmission dichotomique pour l'acquisition linéaire en analysant mathématiquement la pression générée. Nous avons ensuite montré que ce système de transmission peut améliorer la qualité des images reconstruites pour une cadence constante en utilisant les algorithmes de reconstruction conventionnels. La qualité des images reconstruites a été évaluée en termes de résolution et de contraste au moyen de simulations et acquisitions expérimentales réalisées sur des fantômes. La deuxième contribution concerne le développement d'une nouvelle méthode d'imagerie 2D en ondes plane opérant dans le domaine de Fourier et basée sur le théorème de la coupe centrale. Les résultats que nous avons obtenus montrent que l'approche proposée fournit des résultats très proches de ceux fournit par les méthodes classiques en termes de résolution latérale et contraste de l'image. La troisième contribution concerne le développement d'une transformation spatiale explicite permettant d'étendre les méthodes 2D opérant dans le domaine de Fourier d'une acquisition en géométrie linéaire avec des ondes planes à la géométrie sectorielle avec des ondes divergente en transmission. 
Les résultats que nous avons obtenus à partir de simulations et d'acquisitions expérimentales in vivo montrent que l'application de cette extension à la méthode de Lu permet d'obtenir la même qualité d’image que la méthode spatiale de Papadacci basée sur des ondes divergentes, mais avec une complexité de calcul plus faible. Finalement, la formulation proposée en 2D pour les méthodes ultra-rapides opérant dans le domaine de Fourier ont été étendues en 3D. L'approche proposée donne des résultats compétitifs associés à une complexité de calcul beaucoup plus faible par rapport à la technique de retard et somme conventionnelle
Three-dimensional echocardiography is one of the most widely used modalities for real-time heart imaging thanks to its noninvasiveness and low cost. However, its real-time capability is limited by the finite speed of sound. To increase the frame rate, plane-wave and diverging-wave transmissions have been proposed to drastically reduce the number of transmissions needed to reconstruct one image. In this thesis, starting from 2D plane-wave imaging methods, the reconstruction of 2D/3D echocardiographic sequences in the Fourier domain using diverging waves is addressed. The main contributions are as follows. The first contribution concerns the study of the influence of the transmission scheme in the context of 2D plane-wave imaging. A dichotomous transmission scheme was proposed. Results show that the proposed scheme improves the quality of the reconstructed B-mode images at a constant frame rate. We then proposed an alternative Fourier-based plane-wave imaging method (Ultrasound Fourier Slice Beamforming). The proposed method was assessed using numerical simulations and experiments. Results revealed that the method produces very competitive image quality compared to state-of-the-art methods. The third contribution concerns the extension of Fourier-based plane-wave imaging methods to sectorial imaging in 2D. We derived an explicit spatial transformation which allows the extension of current Fourier-based plane-wave imaging techniques to the reconstruction of sectorial scans using diverging waves. Results obtained from simulations and experiments show that the derived methods produce competitive results with lower computational complexity compared to the conventional delay-and-sum (DAS) technique. Finally, the 2D Fourier-based diverging-wave imaging methods are extended to 3D. Numerical simulations were performed to evaluate the proposed method.
Results show that the proposed approach provides competitive scores in terms of image quality compared to the DAS technique, but with a much lower computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
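The Ultrasound Fourier Slice Beamforming named in the abstract builds on the projection-slice theorem: the 1D Fourier transform of an image's projection equals a central slice of its 2D Fourier transform. A numpy check of the theorem itself (not of the beamformer):

```python
import numpy as np

# Projection-slice theorem: the 1D FFT of an image's projection
# along y equals the k_y = 0 row of its 2D FFT.
img = np.random.rand(64, 64)
proj = img.sum(axis=0)                  # project the image along y
slice_2d = np.fft.fft2(img)[0, :]       # central slice of the 2D spectrum
assert np.allclose(np.fft.fft(proj), slice_2d)
```

Fourier-domain beamformers exploit this relation in reverse: received echo spectra are mapped onto slices of the image spectrum, and one inverse FFT reconstructs the frame, which is where the lower computational complexity compared to delay-and-sum comes from.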
15

Li, Yubing. "Analyse de vitesse par migration quantitative dans les domaines images et données pour l’imagerie sismique." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM002/document.

Full text
Abstract:
Les expériences sismiques actives sont largement utilisées pour caractériser la structure de la subsurface. Les méthodes dites d’analyse de vitesse par migration ont pour but la détermination d’un macro-modèle de vitesse, lisse, et contrôlant la cinématique de propagation des ondes. Le modèle est estimé par des critères de cohérence d’image ou de focalisation d’image. Les images de réflectivité obtenues par les techniques de migration classiques sont cependant contaminées par des artefacts, altérant la qualité de la remise à jour du macro-modèle. Des résultats récents proposent de coupler l’inversion asymptotique, qui donne des images beaucoup plus propres en pratique, avec l’analyse de vitesse pour la version offset en profondeur. Cette approche cependant demande des capacités de calcul et de mémoire importantes et ne peut actuellement être étendue en 3D.Dans ce travail, je propose de développer le couplage entre l’analyse de vitesse et la migration plus conventionnelle par point de tir. La nouvelle approche permet de prendre en compte des modèles de vitesse complexes, comme par exemple en présence d’anomalies de vitesses plus lentes ou de réflectivités discontinues. C’est une alternative avantageuse en termes d’implémentation et de coût numérique par rapport à la version profondeur. Je propose aussi d’étendre l’analyse de vitesse par inversion au domaine des données pour les cas par point de tir. J’établis un lien entre les méthodes formulées dans les domaines données et images. Les méthodologies sont développées et analysées sur des données synthétiques 2D
Active seismic experiments are widely used to characterize the structure of the subsurface. Migration Velocity Analysis techniques aim at recovering the background velocity model controlling the kinematics of wave propagation. The first step consists of obtaining the reflectivity images by migrating observed data in a given macro velocity model. The estimated model is then updated, assessing the quality of the background velocity model through the image coherency or focusing criteria. Classical migration techniques, however, do not provide a sufficiently accurate reflectivity image, leading to incorrect velocity updates. Recent investigations propose to couple the asymptotic inversion, which can remove migration artifacts in practice, to velocity analysis in the subsurface-offset domain for better robustness. This approach requires large memory and cannot be currently extended to 3D. In this thesis, I propose to transpose the strategy to the more conventional common-shot migration based velocity analysis. I analyze how the approach can deal with complex models, in particular with the presence of low velocity anomaly zones or discontinuous reflectivities. Additionally, it requires less memory than its counterpart in the subsurface-offset domain. I also propose to extend Inversion Velocity Analysis to the data-domain, leading to a more linearized inverse problem than classic waveform inversion. I establish formal links between data-fitting principle and image coherency criteria by comparing the new approach to other reflection-based waveform inversion techniques. The methodologies are developed and analyzed on 2D synthetic data sets
APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Yunqin. "SPECTRAL CALIBRATION FOR SPECTRAL DOMAIN OPTICAL COHERENCE TOMOGRAPHY BASED ON B-SCAN DOPPLER SHIFT WITH IN SITU TISSUE IMAGES." Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1562594660964602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Alqasir, Hiba. "Deep Learning for Chairlift Scene Analysis : Boosting Generalization in Multi-Domain Context." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES045.

Full text
Abstract:
We present our work on chairlift safety using deep-learning techniques, carried out as part of the Mivao project, which aims to develop a computer vision system that acquires images of the chairlift boarding station, analyzes the essential elements, and detects dangerous situations. In this scenario, we have different chairlifts spread over different ski resorts, with a high diversity of acquisition conditions and geometries. When the system is installed on a new chairlift, the goal is to perform an accurate and reliable scene analysis, given the lack of labeled data for that chairlift. In this context, we focus mainly on the chairlift safety bar and propose to classify each image into two categories, depending on whether the safety bar is closed or open. It is therefore an image classification problem with three specific features: (i) the image category depends on a small detail against a cluttered background, (ii) manual annotations are not easy to obtain, (iii) a classifier trained on some chairlifts should perform well on a new one. To guide the classifier towards the important regions of the images, we proposed two solutions: object detection and Siamese networks. Our solutions are motivated by the need to minimize human annotation effort while improving the accuracy of the chairlift safety problem. These contributions, however, are not necessarily limited to this specific context, and they may be applied to other problems in a multi-domain context
This thesis presents our work on chairlift safety using deep learning techniques as part of the Mivao project, which aims to develop a computer vision system that acquires images of the chairlift boarding station, analyzes the crucial elements, and detects dangerous situations. In this scenario, we have different chairlifts spread over different ski resorts, with a high diversity of acquisition conditions and geometries; thus, each chairlift is considered a domain. When the system is installed for a new chairlift, the objective is to perform an accurate and reliable scene analysis, given the lack of labeled data on this new domain (chairlift).In this context, we mainly concentrate on the chairlift safety bar and propose to classify each image into two categories, depending on whether the safety bar is closed (safe) or open (unsafe). Thus, it is an image classification problem with three specific features: (i) the image category depends on a small detail (the safety bar) in a cluttered background, (ii) manual annotations are not easy to obtain, (iii) a classifier trained on some chairlifts should provide good results on a new one (generalization). To guide the classifier towards the important regions of the images, we have proposed two solutions: object detection and Siamese networks. Furthermore, we analyzed the generalization property of these two approaches. Our solutions are motivated by the need to minimize human annotation efforts while improving the accuracy of the chairlift safety problem. However, these contributions are not necessarily limited to this specific application context, and they may be applied to other problems in a multi-domain context
APA, Harvard, Vancouver, ISO, and other styles
18

Ben, Ahmed Olfa. "Features-based MRI brain classification with domain knowledge : application to Alzheimer's disease diagnosis." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0002/document.

Full text
Abstract:
Methodological tools for content-based image indexing and classification are already fairly mature, and the field is opening up to medical applications. In this thesis, we address content-based visual indexing, retrieval, and classification of brain MRI scans to support the diagnosis of Alzheimer's disease (AD). The main idea is to provide the clinician with information about images that share similar visual characteristics. Three categories of subjects are distinguished: healthy controls (NC), subjects with mild cognitive impairment (MCI), and subjects with Alzheimer's disease (AD). We represent brain atrophy as a signal variation in MRI scans (structural MRI and diffusion tensor MRI). Since this task is not trivial, we focus feature extraction on the regions involved in Alzheimer's disease that cause characteristic changes in brain structure: the hippocampus and the posterior cingulate cortex. The extracted features are quantized using the bag-of-visual-words approach, which represents brain atrophy as a visual signature specific to AD. Several information-fusion strategies are applied to strengthen the performance of the diagnostic aid system. The proposed method is automatic (requiring no clinician intervention) and, thanks to the use of a normalized atlas, needs no segmentation step. The results improve on state-of-the-art methods in terms of classification accuracy and processing time
Content-Based Visual Information Retrieval and Classification on Magnetic Resonance Imaging (MRI) is penetrating the universe of IT tools supporting clinical decision making. A clinician can benefit from retrieving a subject's scans with similar patterns. In this thesis, we use the visual indexing framework and pattern recognition analysis based on structural MRI and Diffusion Tensor Imaging (DTI) data to discriminate three categories of subjects: Normal Controls (NC), Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). The approach extracts visual features from the areas most involved in the disease: the Hippocampus and the Posterior Cingulate Cortex. Hence, we represent signal variations (atrophy) inside the Region of Interest anatomy by a set of local features, and we build a disease-related signature using an atlas-based parcellation of the brain scan. The extracted features are quantized using the Bag-of-Visual-Words approach to build one signature per brain/ROI (subject). This yields a transformation of a full MRI brain into a compact disease-related signature. Several schemes of information fusion are applied to enhance the diagnosis performance. The proposed approach is less time-consuming than state-of-the-art methods, is fully computer-based, and does not require the intervention of an expert during the classification/retrieval phase
APA, Harvard, Vancouver, ISO, and other styles
19

Santos, Andre Ynada dos [UNESP]. "Referentes e tendências teóricas sobre análise e representação de imagem na ISKO: uma análise de domínio." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154274.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Given the growing importance of the image as an element of information, which presupposes the development of increasingly accurate approaches to organization and representation, the present work seeks to identify the ways in which the organization and representation of imagery information has been discussed within the International Society for Knowledge Organization (ISKO), with the objective of identifying the theoretical references and trends of image analysis and representation in the universe of knowledge organization at the international level. To this end, the official scientific literature of ISKO is analyzed through the proceedings of its international conferences and the journal Knowledge Organization, from 1990 to 2015. Articles presenting the term(s) image*, picture*, photo*, film*, movie* in the title and/or abstract are selected, after which a domain analysis is carried out using the bibliometric and epistemological approaches proposed by Hjørland (2002).
Considering the increasing importance of image as an element of information, which presupposes the development of increasingly accurate approaches to organization and representation, the present work aims to identify the ways in which the organization and representation of imagery information has been discussed in the International Society for Knowledge Organization (ISKO), with the objective of verifying the theoretical references and trends of image analysis and representation in the universe of knowledge organization at the international level. In order to achieve this, the official scientific literature of the ISKO is analyzed, including the proceedings of the international conferences and the Knowledge Organization journal, from 1990 to 2015. In this sense, I analyzed the articles that present the term(s) image*, picture*, photo*, film*, movie* in the title and/or abstract, using domain analysis in the bibliometric and epistemological approaches proposed by Hjørland (2002).
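The selection criterion above, keeping articles whose title or abstract contains image*, picture*, photo*, film*, or movie*, is a simple wildcard filter over the record metadata. A minimal sketch in Python, with hypothetical records for illustration:

```python
import re

# Wildcard terms from the selection criterion; the trailing * matches any suffix.
TERMS = ["image*", "picture*", "photo*", "film*", "movie*"]

# Compile each wildcard into a word-prefix regex, e.g. image* -> \bimage\w*
patterns = [re.compile(r"\b" + re.escape(t.rstrip("*")) + r"\w*", re.IGNORECASE)
            for t in TERMS]

def matches_criterion(record):
    """Return True if the title or abstract contains any of the wildcard terms."""
    text = record.get("title", "") + " " + record.get("abstract", "")
    return any(p.search(text) for p in patterns)

# Hypothetical records, for illustration only.
records = [
    {"title": "Indexing photographic archives", "abstract": "..."},
    {"title": "Ontology alignment", "abstract": "A study of thesauri."},
]
selected = [r for r in records if matches_criterion(r)]
```

The word-boundary prefix mirrors the truncation operator used in bibliographic databases, so "photo*" also matches "photographic" and "photographs".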
APA, Harvard, Vancouver, ISO, and other styles
20

Piretti, Mattia. "Synthetic DNA as a novel data storage solution for digital images." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22028/.

Full text
Abstract:
During the digital revolution there has been an explosion in the amount of data produced by humanity, and the capacity of conventional storage devices has been struggling to keep up with this aggressive growth. This has highlighted the need for new means to store digital information, especially cold data. In this dissertation we build upon the work done by the I3S MediaCoding team on utilizing DNA as a novel storage medium, thanks to its incredibly high information density and effectively infinite shelf life. We expand on their previous work and adapt it to operate on the Nanopore MinION sequencer, by increasing the noise resistance during the decoding process and adding a post-processing step to repair the damage caused by the sequencing noise.
APA, Harvard, Vancouver, ISO, and other styles
21

Rousselle, Denis. "Classification d’objets au moyen de machines à vecteurs supports dans les images de sonar de haute résolution du fond marin." Thesis, Rouen, INSA, 2016. http://www.theses.fr/2016ISAM0020.

Full text
Abstract:
This thesis aims to improve the classification of underwater objects in high-resolution sonar images, in particular to distinguish mines from harmless objects within a collection of mine-like objects. Our research was guided by two classical constraints of mine warfare: on the one hand, the lack of data, and on the other, the need for interpretable decisions. We therefore built a database as representative as possible and simulated objects in order to complete it. The lack of examples led us to use a compact representation originating in face recognition: Structural Binary Gradient Patterns (SBGP). With the same aim, we derived a semi-supervised domain adaptation method, based on optimal transport, that can be easily interpreted. Finally, we developed a new classification algorithm, the Ensemble of Exemplar-Maximum Excluding Ball (EE-MEB), which is both suited to small datasets and easy to analyze in its decisions
This thesis aims to improve the classification of underwater objects in high resolution sonar images. Especially, we seek to make the distinction between mines and harmless objects from a collection of mine-like objects. Our research was led by two classical constraints of mine warfare: firstly, the lack of data, and secondly, the need for readability of the classification. In this context, we built a database as representative as possible and simulated objects in order to complete it. The lack of examples led us to use a compact representation, originally used by the face recognition community: the Structural Binary Gradient Patterns (SBGP). To the same end, we derived a method of semi-supervised domain adaptation, based on optimal transport, that can be easily interpreted. Finally, we developed a new classification algorithm, the Ensemble of Exemplar-Maximum Excluding Ball (EE-MEB), which is suitable for small datasets and has an easily interpretable decision function
APA, Harvard, Vancouver, ISO, and other styles
22

Al-Nu'aimi, Abdallah S. N. A. "Design, Implementation and Performance Evaluation of Robust and Secure Watermarking Techniques for Digital Coloured Images. Designing new adaptive and robust imaging techniques for embedding and extracting 2D watermarks in the spatial and transform domain using imaging and signal processing techniques." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4255.

Full text
Abstract:
The tremendous spreading of multimedia via the Internet motivates watermarking as a new promising technology for copyright protection. This work is concerned with the design and development of novel algorithms in the spatial and transform domains for robust and secure watermarking of coloured images. These algorithms are adaptive, content-dependent and compatible with the Human Visual System (HVS). The host channels have the ability to host a large information payload, with enough capacity to accept multiple watermarks. This work achieves several contributions in the area of coloured-image watermarking. The most challenging problem is to obtain a robust algorithm that can overcome geometric attacks, which is solved in this work. The search for a very secure algorithm has also been achieved via the use of double secret keys. In addition, the problem of multiple claims of ownership is solved here using an unusual approach. Furthermore, this work differentiates between terms that usually confuse researchers and lead to misunderstanding in most of the previous algorithms. One of the drawbacks in most of the previous algorithms is that the watermark consists of a small number of bits without strict meaning. This work overcomes this weakness by using meaningful images and text with large amounts of data. Contrary to what is found in the literature, this work shows that the green channel is better than the blue channel for hosting the watermarks. A more general and comprehensive test bed, together with a broad range of performance evaluations, is used to fairly judge the algorithms.
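The finding that the green channel is a better watermark host than the blue channel can be illustrated with a deliberately simple least-significant-bit embedding sketch in Python. The function names and pixel data are invented for illustration; the thesis's actual algorithms are adaptive and HVS-compatible, not plain LSB:

```python
def embed_lsb_green(pixels, bits):
    """Embed watermark bits into the LSB of the green channel.
    pixels: list of (R, G, B) tuples; bits: list of 0/1 values."""
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            g = (g & ~1) | bits[i]          # overwrite the green LSB
        out.append((r, g, b))
    return out

def extract_lsb_green(pixels, n):
    """Recover the first n embedded bits from the green channel."""
    return [g & 1 for (_, g, _) in pixels[:n]]

# Round trip on hypothetical pixel data.
img = [(10, 200, 30), (11, 201, 31), (12, 202, 32)]
wm = [1, 0, 1]
stego = embed_lsb_green(img, wm)
recovered = extract_lsb_green(stego, len(wm))
```

Changing only the LSB limits the per-pixel distortion to at most one intensity level, which is why LSB schemes are imperceptible but, unlike the robust methods studied in the thesis, fragile to attacks.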
APA, Harvard, Vancouver, ISO, and other styles
23

Al-Nu'aimi, Abdallah Saleem Na. "Design, implementation and performance evaluation of robust and secure watermarking techniques for digital coloured images : designing new adaptive and robust imaging techniques for embedding and extracting 2D watermarks in the spatial and transform domain using imaging and signal processing techniques." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4255.

Full text
Abstract:
The tremendous spreading of multimedia via the Internet motivates watermarking as a new promising technology for copyright protection. This work is concerned with the design and development of novel algorithms in the spatial and transform domains for robust and secure watermarking of coloured images. These algorithms are adaptive, content-dependent and compatible with the Human Visual System (HVS). The host channels have the ability to host a large information payload, with enough capacity to accept multiple watermarks. This work achieves several contributions in the area of coloured-image watermarking. The most challenging problem is to obtain a robust algorithm that can overcome geometric attacks, which is solved in this work. The search for a very secure algorithm has also been achieved via the use of double secret keys. In addition, the problem of multiple claims of ownership is solved here using an unusual approach. Furthermore, this work differentiates between terms that usually confuse researchers and lead to misunderstanding in most of the previous algorithms. One of the drawbacks in most of the previous algorithms is that the watermark consists of a small number of bits without strict meaning. This work overcomes this weakness by using meaningful images and text with large amounts of data. Contrary to what is found in the literature, this work shows that the green channel is better than the blue channel for hosting the watermarks. A more general and comprehensive test bed, together with a broad range of performance evaluations, is used to fairly judge the algorithms.
APA, Harvard, Vancouver, ISO, and other styles
24

Lundqvist, Melvin, and Agnes Forsberg. "A comparison of OCR methods on natural images in different image domains." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280286.

Full text
Abstract:
Optical character recognition (OCR) is a blanket term for methods that convert printed or handwritten text into machine-encoded text. As the digital world keeps growing, the amount of digital images with text increases, and so does the need for OCR methods that can handle more than plain text documents. There are OCR engines that can convert images of clean documents with an over 99% recognition rate. OCR for natural images is getting more and more attention, but because natural images can be far more diverse than plain text documents, it also leads to complications. To combat these issues it needs to be clear in what areas the OCR methods of today struggle. This thesis aims to answer this by testing three popular, readily available OCR methods on a dataset comprised only of natural images containing text. The results show that one of the methods, GOCR, cannot handle natural images, as its test results were very far from correct. For the other two methods, ABBYY FineReader and Tesseract, the results were better but also show that there is still a long way to go, especially when it comes to images with special fonts. However, when the images are less complicated, some of the methods performed above our expectations.
Optical character recognition (OCR) is an umbrella term for methods that convert printed or handwritten text into machine code. As the digital world grows, so does the number of digital images containing text, and with it the need for OCR methods that can handle more than plain text documents. Today there are OCR engines that can convert images of clean documents into machine code with over 99% accuracy. OCR for photographs is receiving more and more attention, but because photographs are far more diverse than clean text documents, this also leads to problems. Handling this requires clarity about the areas in which today's OCR methods struggle. This thesis aims to answer this question by examining and testing three popular, readily available OCR methods on a dataset consisting only of photographs of natural environments containing text. The results showed that one of the methods, GOCR, cannot handle photographs; GOCR's test results were far from correct. For the other methods, ABBYY FineReader and Tesseract, the results were better but showed that much work remains in the field, especially when it comes to images with special fonts. For less complicated images, however, we were surprised by how good the results were for some of the methods.
APA, Harvard, Vancouver, ISO, and other styles
25

Azimifar, Seyedeh-Zohreh. "Image Models for Wavelet Domain Statistics." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/938.

Full text
Abstract:
Statistical models for the joint statistics of image pixels are of central importance in many image processing applications. However the high dimensionality stemming from large problem size and the long-range spatial interactions make statistical image modeling particularly challenging. Commonly this modeling is simplified by a change of basis, mostly using a wavelet transform. Indeed, the wavelet transform has widely been used as an approximate whitener of statistical time series. It has, however, long been recognized that the wavelet coefficients are neither Gaussian, in terms of the marginal statistics, nor white, in terms of the joint statistics. The question of wavelet joint models is complicated and admits many possibilities, with statistical structures within subbands, across orientations, and across scales. Although a variety of joint models have been proposed and tested, few models appear to be directly based on empirical studies of wavelet coefficient cross-statistics. Rather, they are based on intuitive or heuristic notions of wavelet neighborhood structures. Without an examination of the underlying statistics, such heuristic approaches necessarily leave unanswered questions of neighborhood sufficiency and necessity. This thesis presents an empirical study of joint wavelet statistics for textures and other imagery including dependencies across scale, space, and orientation. There is a growing realization that modeling wavelet coefficients as independent, or at best correlated only across scales, may be a poor assumption. While recent developments in wavelet-domain Hidden Markov Models (notably HMT-3S) account for within-scale dependencies, we find that wavelet spatial statistics are strongly orientation dependent, structures which are surprisingly not considered by state-of-the-art wavelet modeling techniques.
To demonstrate the effectiveness of the studied wavelet correlation models a novel non-linear correlated empirical Bayesian shrinkage algorithm based on the wavelet joint statistics is proposed. In comparison with popular nonlinear shrinkage algorithms, it improves the denoising results.
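The thesis's correlated empirical Bayesian shrinkage is not reproduced here, but the baseline it improves upon, coefficient-wise soft shrinkage of wavelet detail coefficients treated as independent, can be sketched with a single-level Haar transform in pure Python. This is a simplified illustration of the classical approach, not the proposed algorithm:

```python
import math

def haar_1d(signal):
    """One level of the orthonormal 1-D Haar transform (even-length input)."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def inv_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft_threshold(coeffs, t):
    """Classical coefficient-wise soft shrinkage; it treats coefficients as
    independent, which is precisely the assumption the thesis argues against."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t):
    """Shrink the detail band only, then reconstruct."""
    approx, detail = haar_1d(signal)
    return inv_haar_1d(approx, soft_threshold(detail, t))
```

A piecewise-constant signal passes through unchanged (its detail coefficients are zero), while small detail coefficients, which mostly carry noise, are suppressed.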
APA, Harvard, Vancouver, ISO, and other styles
26

Temizel, Alptekin. "Wavelet domain image resolution enhancement methods." Thesis, University of Surrey, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ibikunle, John Olutayo. "Projection domain compression of image information." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/47614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Souare, Moussa. "Sar Image Analysis In Wavelets Domain." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405014006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Walker, Troy L. "Automating the Extraction of Domain-Specific Information from the Web-A Case Study for the Genealogical Domain." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd607.walker.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Sendashonga, Mireille. "Image quality assessment using frequency domain transforms." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99537.

Full text
Abstract:
Measurement of image quality plays a central role in optimization and evaluation of imaging systems. The most straight-forward way to assess image quality is subjective evaluations by human observers, where the mean value of their scores is used as the quality measure. However, objective (quantitative) measures are needed because subjective evaluations are impractical and expensive. The aim of this thesis is to develop simple and low-complexity metrics for quality assessment of digital images.
Traditionally, the most widely used quantitative measures are the mean squared error and measures that model the human visual system. The proposed method uses the Discrete Cosine Transform and the Discrete Wavelet Transform to divide images into four frequency bands and relates the visual quality of the distorted images to the weighted average of the mean squared error between original and distorted images within each band.
The performance of the metrics presented in this thesis is tested and validated on a large database of subjective quality ratings. Simulations show that the proposed metrics accurately predict visual quality and outperform current state-of-the-art methods with simple and easily implemented processing steps.
Extensions of the proposed image quality metrics are investigated. More particularly, this thesis explores image quality assessment when the reference image is only partially available (reduced reference settings), and presents a method for successfully quantifying the quality of distorted images in such settings.
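The core of the proposed full-reference metric, splitting both images into four frequency bands and taking a weighted average of per-band mean squared errors, can be sketched as follows. Here the four bands come from a single-level 2-D Haar split, and the weights are illustrative placeholders, not the values fitted to subjective ratings in the thesis:

```python
def haar2d_bands(img):
    """Single-level 2-D Haar split into four bands (LL, LH, HL, HH).
    img: 2-D list of floats with even dimensions."""
    s = 0.5  # combined 1/sqrt(2) scaling for the row and column passes
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b, c, d = img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) * s
            LH[i // 2][j // 2] = (a - b + c - d) * s
            HL[i // 2][j // 2] = (a + b - c - d) * s
            HH[i // 2][j // 2] = (a - b - c + d) * s
    return [LL, LH, HL, HH]

def band_mse(x, y):
    """Mean squared error between two equally sized 2-D bands."""
    n = len(x) * len(x[0])
    return sum((xv - yv) ** 2 for xr, yr in zip(x, y)
               for xv, yv in zip(xr, yr)) / n

def weighted_band_quality(ref, dist, weights):
    """Weighted average of per-band MSEs between reference and distorted image."""
    bands_r, bands_d = haar2d_bands(ref), haar2d_bands(dist)
    return sum(w * band_mse(br, bd)
               for w, br, bd in zip(weights, bands_r, bands_d))

# Illustrative usage with made-up pixel values and weights.
ref = [[52.0, 55.0], [61.0, 59.0]]
noisy = [[53.0, 54.0], [60.0, 61.0]]
score = weighted_band_quality(ref, noisy, [0.4, 0.2, 0.2, 0.2])
```

Weighting the bands differently lets the metric penalize distortions in the frequency ranges to which human observers are most sensitive, which is the intuition behind fitting the weights to subjective quality ratings.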
APA, Harvard, Vancouver, ISO, and other styles
31

Kohlberger, Timo. "Variational domain decomposition for parallel image processing." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985127996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Goda, Matthew. "Wavelet domain image restoration and super-resolution." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/289808.

Full text
Abstract:
Multi-resolution techniques, and especially the wavelet transform, provide unique benefits in image representation and processing not otherwise possible. While wavelet applications in image compression and denoising have become extremely prevalent, their use in image restoration and super-resolution has not been exploited to the same degree. One issue is the extension of 1-D wavelet transforms into 2-D via separable transforms, versus the non-separability of typical circular-aperture imaging systems. This mismatch leads to performance degradations. Image restoration, the inverse problem to image formation, is the first major focus of this research. A new multi-resolution transform is presented to improve performance. The transform is called a Radially Symmetric Discrete Wavelet-like Transform (RS-DWT) and is designed based on the non-separable blurring of the typical incoherent circular-aperture imaging system. The results using this transform show marked improvement compared to other restoration algorithms both in Mean Square Error and visual appearance. Extensions to the general algorithm that further improve results are discussed. The ability to super-resolve imagery using wavelet-domain techniques is the second major focus of this research. Super-resolution, the ability to reconstruct object information lost in the imaging process, has been an active research area for many years. Multiple experiments are presented which demonstrate the possibilities and problems associated with super-resolution in the wavelet domain. Finally, super-resolution in the wavelet domain using Non-Linear Interpolative Vector Quantization is studied and the results of the algorithm are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
33

Suen, Tsz-yin Simon. "Curvature domain stitching of digital photographs." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38800901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

ABDELWAHAB, MOATAZ MAHMOUD. "NOVEL FACIAL IMAGE RECOGNITION TECHNIQUES EMPLOYING PRINCIPAL COMPONENT ANALYSIS." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2181.

Full text
Abstract:
Recently, pattern recognition/classification has received considerable attention in diverse engineering fields such as biomedical imaging, speaker identification, fingerprint recognition, and face recognition. This study contributes novel techniques for facial image recognition based on two-dimensional principal component analysis in the transform domain. These algorithms reduce the storage requirements by an order of magnitude and the computational complexity by a factor of 2 while maintaining the excellent recognition accuracy of the recently reported methods. The proposed recognition systems employ different structures, multicriteria and multitransform. In addition, principal component analysis in the transform domain in conjunction with vector quantization is developed, which results in further improvement in recognition accuracy and dimensionality reduction. Experimental results confirm the excellent properties of the proposed algorithms.
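The recognition systems build on two-dimensional PCA, which operates on image matrices directly instead of flattened vectors. A minimal sketch of plain 2D-PCA with NumPy follows; the data is random, for illustration only, and the transform-domain and vector-quantization extensions of the dissertation are not shown:

```python
import numpy as np

def twod_pca_basis(images, k):
    """Compute the top-k 2D-PCA projection axes.
    images: array of shape (n, h, w). Returns a (w, k) projection matrix."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance matrix: average of A^T A over the centered images.
    G = sum(a.T @ a for a in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]         # keep the top-k eigenvectors

def project(images, X):
    """Feature matrices Y = A X, one (h x k) matrix per image."""
    return images @ X

rng = np.random.default_rng(0)
imgs = rng.standard_normal((20, 8, 6))     # 20 hypothetical 8x6 images
X = twod_pca_basis(imgs, 2)
feats = project(imgs, X)
```

Because the covariance is computed over image rows rather than full flattened vectors, the eigenproblem is only w x w, which is the source of the storage and complexity savings that 2D-PCA offers over classical eigenfaces.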
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Sangkeun. "Video analysis and abstraction in the compressed domain." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180041/unrestricted/lee%5fsangkeun%5f200312%5fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Eriksson, Morris, and Hannes Karlsson. "Evaluation of deep-learning image reconstruction for photon-counting spectral CT : A comparison between image domain- and projection domain-denoising." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297550.

Full text
Abstract:
A promising new technology in medical imaging is photon-counting detectors (PCD). It could allow for images with higher resolution, less noise, and improved material decomposition while possibly reducing radiation exposure for patients. Recently, the possibility to use deep-learning denoising in tandem with PCD to increase image quality is starting to be investigated. In this report we use a variety of standard image quality metrics such as MSE, SSIM and MTF, on different image phantoms, to evaluate two ways of implementing neural networks in the reconstruction process: in the image domain and in the sinogram domain. We show that implementing the network in the image domain seems to be the most promising choice to increase image quality, observing higher contrast, reduced noise and smaller errors than for the sinogram domain network. We also discuss why this might be the case. Additionally, we study the effects of optimizing the networks and how well the neural networks generalize to types of phantoms other than the ones they were trained on.
APA, Harvard, Vancouver, ISO, and other styles
37

Hu, Qingwen. "Image and video scalability in the compressed domain." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq20922.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ngadiran, Ruzelita. "Rate scalable image compression in the wavelet domain." Thesis, University of Newcastle Upon Tyne, 2012. http://hdl.handle.net/10443/1437.

Full text
Abstract:
This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit plane coding. The first part of the thesis investigates the scalar quantisation technique for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of partitioning processes, and such lists involve a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. Listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient. The state is updated during coding based on a test of significance. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes. In this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission. The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images.
A different approach to directional filters in the wavelet transforms is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with a directional filter. Further investigation with colour images shows the transform is able to preserve texture and curves.
APA, Harvard, Vancouver, ISO, and other styles
39

Dietze, Martin. "Second generation image watermarking in the wavelet domain." Thesis, University of Buckingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pagliari, Carla Liberal. "Perspective-view image matching in the DCT domain." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Armstrong, Andrew. "Image indexing and retrieval in the compressed domain." Thesis, University of South Wales, 2003. https://pure.southwales.ac.uk/en/studentthesis/image-indexing-and-retrieval-in-the-compressed-domain(3601fe0d-cb77-4031-855b-a9e9125ec211).html.

Full text
Abstract:
This thesis is focused on low computational cost algorithms to facilitate the automatic indexing and retrieval of digital images. Several techniques are proposed, which can be classified into three distinct stages of application. In the first stage, a novel approach is proposed to provide a foundation for using genetic algorithms in imaging applications. The approach can be applied readily to any block-based quantisation problem, including that of JPEG DCT domain information. The second stage tackles the main problem investigated: how to automatically index large numbers of images reliably and provide a mechanism to retrieve those images. Several methods are proposed, which apply various indexing techniques to JPEG images by extracting keys directly from the DCT domain. This avoids the computationally costly decompression stage required when applying pixel-based techniques and enables the creation of index keys from partially decoded data through manipulation of DCT coefficients, to increase speed and reduce processing costs. The use of neural and genetic techniques to extract and organise DCT coefficients renders decompression unnecessary. The third stage focuses on image security. Techniques are developed and analysed to allow for the inclusion of watermarking and other data. Using these techniques, a substantial proportion of non-image data can be stored within the image invisibly at little or no extra cost. These allow additional security to be imposed; coupling these with the indexing methods, image tracking and change control can be implemented with very little overhead. All the work presented lends itself easily to a number of domains including web databases, general storage, diagnosis and secure image tracking. More importantly, extensive experimentation and analysis prove that the proposed techniques are faster than their counterparts with no loss in the quality of results generated.
APA, Harvard, Vancouver, ISO, and other styles
42

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Shoushun. "Time domain CMOS image sensor : from photodetection to on-chip image processing /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20CHEN.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ackerman, Wesley. "Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8684.

Full text
Abstract:
We expand the scope of image-to-image translation to include more distinct image domains, where the image sets have analogous structures, but may not share object types between them. Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains (SUNIT) is built to more successfully translate images in this setting, where content from one domain is not found in the other. Our method trains an image translation model by learning encodings for semantic segmentations of images. These segmentations are translated between image domains to learn meaningful mappings between the structures in the two domains. The translated segmentations are then used as the basis for image generation. Beginning image generation with encoded segmentation information helps maintain the original structure of the image. We qualitatively and quantitatively show that SUNIT improves image translation outcomes, especially for image translation tasks where the image domains are very distinct.
APA, Harvard, Vancouver, ISO, and other styles
45

Al-Suwailem, Umar A. "Continuous spatial domain image identification and restoration with multichannel applications /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9737865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fisher, Matthew Jackson. "Parametric Optimization Design System for a Fluid Domain Assembly." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2373.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Suen, Tsz-yin Simon, and 孫子彥. "Curvature domain stitching of digital photographs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38800901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

McBride, Reo H. "Toward a Domain Theory of Fluent Oral Reading with Expression." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd1150.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Gokozan, Tolga. "Template Based Image Watermarking In The Fractional Fourier Domain." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605837/index.pdf.

Full text
Abstract:
One of the main features of digital technology is that digital media can be duplicated and reproduced easily. However, this allows unauthorized and illegal use of information, i.e. data piracy. To protect digital media against illegal attempts, a signal called a watermark is embedded into the multimedia data in a robust and invisible manner. A watermark is a short sequence of information which contains the owner's identity. It is used for evidence of ownership and copyright purposes. In this thesis, we use the fractional Fourier transformation (FrFT) domain, which combines the space and spatial frequency domains, for watermark embedding, and implement the well-known secure spread spectrum watermarking approach. However, the spread spectrum watermarking scheme is fragile against geometrical attacks such as rotation and scaling. To gain robustness against geometrical attacks, an invisible template is inserted into the watermarked image in the Fourier transformation domain. The template contains no information in itself but is used to detect the transformations undergone by the image. Once the template is detected, these transformations are inverted and the watermark signal is decoded. Watermark embedding is performed by considering the masking characteristics of the Human Visual System, to ensure watermark invisibility. In addition, we implement watermarking algorithms which use different transformation domains, such as the discrete cosine transformation domain, the discrete Fourier transformation domain and the discrete wavelet transformation domain, for watermark embedding. The performance of these algorithms and the FrFT domain watermarking scheme is tested against various attacks and distortions, and their robustness is compared.
APA, Harvard, Vancouver, ISO, and other styles
50

Voulgaris, Georgios. "Techniques for content-based image characterization in wavelets domain." Thesis, University of South Wales, 2008. https://pure.southwales.ac.uk/en/studentthesis/techniques-for-contentbased-image-characterization-in-wavelets-domain(14c72275-a91e-4ba7-ada8-bdaee55de194).html.

Full text
Abstract:
This thesis documents the research which has led to the design of a number of techniques aiming to improve the performance of content-based image retrieval (CBIR) systems in the wavelet domain using texture analysis. Attention was drawn to CBIR in the transform domain, and in particular wavelets, because of their excellent characteristics for compression and texture extraction applications and their wide adoption in both the research community and industry. The issue of performance is addressed in terms of accuracy and speed. The rationale for this research builds upon the conclusion that CBIR has not yet reached a good balance of accuracy, efficiency and speed for wide adoption in practical applications. The issue of bridging the sensory gap, defined as "[the difference] between the object in the real world and the information in a (computational) description derived from a recording of that scene", has yet to be resolved. Furthermore, speed improvement remains an uncharted territory, as does feature extraction directly from the bitstream of compressed images. To address the above requirements, the first part of this work introduces three techniques designed to jointly address the issues of accuracy and processing cost of texture characterization in the wavelet domain. The second part introduces a new model for mapping the wavelet coefficients of an orthogonal wavelet transformation to a circular locus. The model is applied in order to design a novel rotation-invariant texture descriptor. All of the aforementioned techniques are also designed to bridge the gap between texture-based image retrieval and image compression by using appropriate compatible design parameters. The final part introduces three techniques for improving the speed of a CBIR query through more efficient calculation of the L1-distance when it is used as an image similarity metric.
The contributions conclude with a novel technique which, in conjunction with a widely adopted wavelet-based compression algorithm, extracts texture information directly from the compressed bit-stream for savings in speed and storage requirements. The experimental findings indicate that the proposed techniques form a solid groundwork which can be extended to practical applications.
APA, Harvard, Vancouver, ISO, and other styles