Follow this link to see other types of publications on the topic: Image processing.

Dissertations / Theses on the topic "Image processing"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See the top 50 dissertations / theses for your research on the topic "Image processing".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract of the work online, if one is listed in the metadata.

Browse dissertations / theses from many different disciplines and organize your bibliography correctly.

1

Bergström, Britt, and Erica Burlin. "Bildens godkännandeprocess i katalogproduktion : The image approval process in a catalog production". Thesis, Högskolan Dalarna, Grafisk teknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4210.

Full text
Abstract:
The objective of this thesis has been to investigate the approval process for an image. This investigation has been carried out at four catalog-producing companies and three companies working with repro or printing. The information was gathered through interviews and surveys and later used for evaluation. The result of the evaluation has shown that all businesses are very good at the technical aspects, but also that their biggest problem is communication. The conclusion is that businesses need a clear structure for the image process. This will minimize the communication problems and make the process effective.
APA, Harvard, Vancouver, ISO, and other styles
2

Murphy, Brian P. "Image processing techniques for acoustic images". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26585.

Full text
Abstract:
Approved for public release; distribution is unlimited.
The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering and median filtering.
APA, Harvard, Vancouver, ISO, and other styles
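A minimal numpy sketch of the two families of techniques this abstract names, median filtering for noise suppression and gradient-based edge detection, applied to a simulated image. This is purely an illustration; the thesis's MATLAB sonar model and modified Kalman filter are not reproduced here.

```python
import numpy as np

def median_filter(img, k=3):
    """Median filter via sliding windows (suppresses speckle-like noise)."""
    half = k // 2
    padded = np.pad(img, half, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def sobel_edges(img):
    """Gradient-magnitude edge map computed with 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# Simulated "acoustic" image: a bright square target plus additive noise.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
noisy = img + 0.3 * rng.standard_normal(img.shape)

denoised = median_filter(noisy)   # noise suppressed, edges preserved
edges = sobel_edges(denoised)     # strong response along the square's border
```

The median filter is chosen over a mean filter here because it preserves the target's edges, which the subsequent edge detector depends on.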
3

Khan, Preoyati. "Cluster Based Image Processing for ImageJ". Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1492164847520322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yallop, Marc Richard. "Image processing techniques for passive millimetre wave images". Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baabd, A., M. Y. Tymkovich, and О. Г. Аврунін. "Image Processing of Panoramic Dental X-Ray Images". Thesis, ХГУ, 2018. http://openarchive.nure.ua/handle/document/6204.

Full text
Abstract:
A panoramic image makes it possible to see clearly the state of the teeth, the dental rudiments located in the jaw, the temporomandibular joints, and the maxillary sinuses. Notably, this type of study involves a small dose of radiation. Indications for this type of study include dental implantation, bite correction, suspected bone tissue inflammation, monitoring of the growth and development of the teeth, and the diagnosis of other dental problems.
APA, Harvard, Vancouver, ISO, and other styles
6

AbouRayan, Mohamed. "Real-time Image Fusion Processing for Astronomical Images". University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461449811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Younhee. "Towards lower bounds on distortion in information hiding". Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3403.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2008.
Vita: p. 133. Thesis directors: Zoran Duric, Dana Richards. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. Title from PDF t.p. (viewed Mar. 17, 2009). Includes bibliographical references (p. 127-132). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
8

Tummala, Sai Virali, and Veerendra Marni. "Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Acosta, Edward Kelly. "A programmable processor for the Cheops image processing system". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Abdulla, Ghaleb. "An image processing tool for cropping and enhancing images". Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12232009-020207/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Musoke, David. "Digital image processing with the Motorola 56001 digital signal processor". Scholarly Commons, 1992. https://scholarlycommons.pacific.edu/uop_etds/2236.

Full text
Abstract:
This report describes the design and testing of the Image56 system, an IBM-AT based system which consists of an analog video board and a digital board. The former contains all the analog and video support circuitry needed to perform real-time image processing functions. The latter is responsible for performing non-real-time, complex image processing tasks using a Motorola DSP56001 digital signal processor. It is supported by eight image data buffers and 512K words of DSP memory (see Appendix A for a schematic diagram).
APA, Harvard, Vancouver, ISO, and other styles
12

Launay, Claire. "Discrete determinantal point processes and their application to image processing". Thesis, Université de Paris (2019-....), 2020. http://www.theses.fr/2020UNIP7034.

Full text
Abstract:
Determinantal point processes (DPPs for short) are probabilistic models that capture negative correlations or repulsion within a set of elements. They tend to generate diverse or distant subsets of elements. This notion of similarity or proximity between elements is defined and stored in the kernel associated with each DPP. This thesis studies these models in a discrete framework, defined on a discrete and finite set of elements. We are interested in their application to image processing, when the initial set of points corresponds to the pixels or the patches of an image. Chapters 1 and 2 introduce determinantal point processes in a general discrete framework, their main properties, and the algorithms usually used to sample them, i.e. to select a subset of points distributed according to the chosen DPP. In this framework, the kernel of a DPP is a matrix. The main algorithm is a spectral algorithm based on the computation of the eigenvalues and eigenvectors of the DPP kernel. In Chapter 2, we present a sampling algorithm based on a thinning procedure and a Cholesky decomposition which does not require the spectral decomposition of the kernel. This algorithm is exact and, under certain conditions, competitive with the spectral algorithm. Chapter 3 studies DPPs defined over all the pixels of an image, called Determinantal Pixel Processes (DPixPs). This new framework imposes periodicity and stationarity assumptions that have consequences for the kernel of the process and for the properties of the repulsion it generates. We study this model applied to Gaussian texture synthesis, using shot noise models. In this chapter, we are also interested in the estimation of the DPixP kernel from one or several samples. Chapter 4 explores DPPs defined on the set of patches of an image, that is, the family of small square sub-images of a given size contained in the image. The aim is to select a proportion of these patches, diverse enough to be representative of the information contained in the image. Such a selection can speed up certain patch-based image processing algorithms, or even improve the quality of existing algorithms that require patch subsampling. We present an application of this question to a texture synthesis algorithm.
APA, Harvard, Vancouver, ISO, and other styles
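The spectral sampling algorithm this abstract refers to can be sketched as follows. This is a standard textbook implementation (the Hough-Krishnapur-Peres-Virág scheme for a marginal kernel K), not code from the thesis.

```python
import numpy as np

def sample_dpp(K, rng):
    """Draw one sample from a discrete DPP with marginal kernel K
    (symmetric, eigenvalues in [0, 1]) via the spectral algorithm."""
    eigvals, eigvecs = np.linalg.eigh(K)
    # Phase 1: keep each eigenvector independently with probability
    # equal to its eigenvalue.
    keep = rng.random(len(eigvals)) < eigvals
    V = eigvecs[:, keep]
    sample = []
    # Phase 2: pick one item per remaining column, shrinking the basis.
    while V.shape[1] > 0:
        # P(pick item i) is proportional to the squared norm of row i of V.
        probs = np.clip((V ** 2).sum(axis=1), 0.0, None)
        probs /= probs.sum()
        i = int(rng.choice(len(probs), p=probs))
        sample.append(i)
        # Eliminate the e_i component, drop one column, re-orthonormalize.
        j = int(np.argmax(np.abs(V[i, :])))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

# For a rank-k projection kernel, the sample always has exactly k items.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
K = Q[:, :2] @ Q[:, :2].T
subset = sample_dpp(K, rng)
```

The eigendecomposition is the expensive step; the thesis's Cholesky/thinning sampler avoids it entirely.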
13

Pedron, Ilario. "Digital image processing for cancer cell finding using color images". Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ahtaiba, Ahmed Mohamed A. "Restoration of AFM images using digital signal and image processing". Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604322.

Full text
Abstract:
All atomic force microscope (AFM) images suffer from distortions, which are principally produced by the interaction between the measured sample and the AFM tip. If the three-dimensional shape of the tip is known, the distorted image can be processed and the original surface form 'restored', typically by deconvolution approaches. This restored image gives a better representation of the real 3D surface of the measured sample than the original distorted image. In this thesis, a quantitative investigation of the use of morphological deconvolution to restore AFM images has been carried out via computer simulation using various simulated tips and objects. This thesis also presents a systematic quantitative study of the blind tip estimation algorithm via computer simulation using various simulated tips and objects. The thesis proposes a new method for estimating the impulse response of the AFM by measuring a micro-cylinder with a priori known dimensions using contact-mode AFM. The estimated impulse response is then used to restore subsequent AFM images measured with the same tip under similar measurement conditions. Significantly, an approximation to what corresponds to the impulse response of the AFM can be deduced using this method. The suitability of this novel approach for restoring AFM images has been confirmed using both computer simulation and real experimental AFM images. The thesis suggests another new approach (the impulse response technique) to estimate the impulse response of the AFM, this time from a square pillar sample measured using contact-mode AFM. Once the impulse response is known, a deconvolution is carried out between the estimated impulse response and typical 'distorted' raw AFM images in order to reduce the distortion effects.
The experimental results and the computer simulations validate the performance of the proposed approach, showing that AFM image accuracy is significantly improved. A further new approach has been implemented in this research programme for the restoration of AFM images, enabling a combination of cantilever and feedback signals at different scanning speeds. In this approach, the AFM topographic image is constructed by summing the height image used to drive the Z-scanner and the deflection image with a weight factor α that is close to 3. The value of α has been determined experimentally by trial and error. This method has been tested at ten different scanning speeds and it consistently gives more faithful topographic images than the original AFM images.
APA, Harvard, Vancouver, ISO, and other styles
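The morphological restoration described in this abstract can be sketched in one dimension: imaging is modeled as a grey-scale dilation of the surface by the tip shape, and restoration as the matching erosion. This is a generic illustration of the technique, with a made-up surface and tip, not code or data from the thesis.

```python
import numpy as np

def grey_dilation_1d(surface, tip):
    """Simulate AFM imaging of a line profile: the image is the grey-scale
    dilation of the surface by the (symmetric) tip shape, apex at height 0."""
    m, half = len(tip), len(tip) // 2
    padded = np.pad(surface, half, mode="edge")
    return np.array([np.max(padded[x:x + m] + tip) for x in range(len(surface))])

def grey_erosion_1d(image, tip):
    """Morphological restoration: erode the measured image by the same tip,
    i.e. surface_est(x) = min_t [image(x + t) - tip(t)]."""
    m, half = len(tip), len(tip) // 2
    padded = np.pad(image, half, mode="edge")
    return np.array([np.min(padded[x:x + m] - tip) for x in range(len(image))])

# A raised plateau scanned with a blunt tip (apex 0 at the centre).
surface = np.zeros(50)
surface[20:30] = 5.0
tip = np.array([-2.0, -1.0, 0.0, -1.0, -2.0])

image = grey_dilation_1d(surface, tip)   # broadened, distorted scan
restored = grey_erosion_1d(image, tip)   # morphological surface estimate
```

The erosion-of-a-dilation is a morphological closing, so the estimate always lies between the true surface and the distorted image; steep sidewalls remain rounded, which is exactly why the thesis investigates restoration quality quantitatively.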
15

Wear, Steven M. "Shift-invariant image reconstruction of speckle-degraded images using bispectrum estimation /". Online version of thesis, 1990. http://hdl.handle.net/1850/11219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kariani, H. "Review of Modern Frameworks for Microscopy Image Processing". Thesis, Ukraine, Kharkiv, 2021. https://openarchive.nure.ua/handle/document/16613.

Full text
Abstract:
Modern research in microscopy image processing requires a deeper understanding of the influence of different factors on the registration of this type of biomedical image. Analysis of this process requires smart software able to obtain quantitative parameters of micro-objects with acceptable processing speed.
APA, Harvard, Vancouver, ISO, and other styles
17

Sahandi, Reza. "Image Processing". Thesis, University of Bradford, 1987. http://eprints.bournemouth.ac.uk/9884/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Soares, Andre Borin. "Exploração do paralelismo em arquiteturas para processamento de imagens e vídeo". Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/10539.

Full text
Abstract:
Nowadays video and image processing is a very important research area, because of its widespread use in a broad class of applications such as entertainment, surveillance, supervision and control, medicine, and many others. Some of the algorithms used to perform recognition, compression, decompression, filtering, restoration, and enhancement of images require computational power beyond that available in conventional processors, often requiring the development of dedicated architectures. This document presents work on design space exploration in the field of video and image processing architectures through the use of parallel processing. Many characteristics particular to this kind of architecture are pointed out. A novel technique is presented in which specialized Processing Elements (PEs) work cooperatively over a network-on-chip communication structure.
APA, Harvard, Vancouver, ISO, and other styles
19

Louridas, Efstathios. "Image processing and analysis of videofluoroscopy images in cleft palate patients". Thesis, University of Kent, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Bandyopadhyay, Abhishek. "Matrix transform imager architecture for on-chip low-power image processing". Diss., Available online, Georgia Institute of Technology, 2004:, 2004. http://etd.gatech.edu/theses/available/etd-08192004-133909/unrestricted/bandyopadhyay%5Fabhishek%5F200412%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005.
Smith, Mark, Committee Member; DeWeerth, Steve, Committee Member; Jackson, Joel, Committee Member; David Anderson, Committee Member; Hasler, Paul, Committee Chair. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
21

Hillmer, Dirk. "Computer-based analysis of Biological Images Neuronal Networks for Image Processing". Electronic Thesis or Diss., Bordeaux, 2024. https://theses.hal.science/tel-04650911.

Full text
Abstract:
AI in medicine is a rapidly growing field, and its significance in dermatology is increasingly pronounced. Advancements in neural networks, accelerated by powerful GPUs, have catalyzed the development of AI systems for skin disorder analysis. This study presents a novel approach that harnesses computer graphics techniques to create AI networks tailored to skin disorders. The synergy of these techniques not only generates training data but also optimizes image manipulation for enhanced processing. Vitiligo, a common depigmenting skin disorder, serves as a poignant case study. The evolution of targeted therapies underscores the necessity for precise assessment of the affected surface area. However, traditional evaluation methods are time-intensive and prone to inter- and intra-rater variability. In response, this research endeavors to construct an artificial intelligence (AI) system capable of objectively quantifying facial vitiligo severity. The AI model's training and validation leveraged a dataset of one hundred facial vitiligo images. Subsequently, an independent dataset of sixty-nine facial vitiligo images was used for final evaluation. The scores assigned by three expert physicians were compared with both inter- and intra-rater performances, as well as the AI's assessments. Impressively, the AI model achieved a remarkable accuracy of 93%, demonstrating its efficacy in quantifying facial vitiligo severity. The outcomes highlighted substantial concordance between AI-generated scores and those provided by human raters. Expanding beyond facial vitiligo, this model's utility in analyzing full-body images and images from various angles emerged as a promising avenue for exploration. Integrating these images into a comprehensive representation could offer insights into vitiligo's progression over time, thereby enhancing clinical diagnosis and research outcomes.
While the journey has been fruitful, certain aspects of the research encountered roadblocks due to insufficient image and data resources. An exploration into the analysis of in vivo mouse models and the pigmentation of skin cells in preclinical embryo models, as well as retina image recognition, was regrettably halted. Nevertheless, these challenges illuminate the dynamic nature of research and underscore the importance of adaptability in navigating unforeseen obstacles. In conclusion, this study showcases the potential of AI to revolutionize dermatological assessment. By providing an objective evaluation of facial vitiligo severity, the proposed AI model offers a valuable adjunct to human assessment in both clinical practice and research settings. The ongoing pursuit of integrating AI into the analysis of diverse image datasets holds promise for broader applications in dermatology and beyond.
APA, Harvard, Vancouver, ISO, and other styles
22

Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis /". Online version of thesis, 1990. http://hdl.handle.net/1850/11366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

ROS, RENATO A. "Fusão de imagens médicas para aplicação de sistemas de planejamento de tratamento em radioterapia". Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11417.

Full text
Abstract:
Thesis (Doctorate) - Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP.
APA, Harvard, Vancouver, ISO, and other styles
24

Karelid, Mikael. "Image Enhancement over a Sequence of Images". Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12523.

Full text
Abstract:
This Master's thesis was conducted at the National Laboratory of Forensic Science (SKL) in Linköping. When images to be analyzed at SKL show an object of interest but are of poor quality, there may be a need to enhance them. If several images of the object are available, the total amount of information can be used to estimate one single enhanced image. A program to do this has been developed by studying methods for image registration and high-resolution image estimation. Tests of important parts of the procedure have been conducted. The final results are satisfying, and the key to a good high-resolution image seems to be the precision of the image registration. Improvements to this part may lead to even better results. Further suggestions for improvement have also been proposed.
APA, Harvard, Vancouver, ISO, and other styles
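The register-then-fuse idea in this abstract can be illustrated minimally: estimate the shift of each frame against a reference, align, and average. Phase correlation with whole-pixel shifts is used here purely as an illustrative registration method; the thesis's registration and high-resolution estimation are more elaborate.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer circular shift between ref and img from the
    phase-correlation peak; returns the shift that re-aligns img to ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:              # wrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def fuse(frames):
    """Register every frame to the first one and average the aligned stack."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = phase_correlation_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

# Two frames related by a pure shift fuse back to the reference exactly.
rng = np.random.default_rng(0)
ref = rng.standard_normal((32, 32))
shifted = np.roll(ref, (3, -2), axis=(0, 1))
fused = fuse([ref, shifted])
```

Averaging aligned frames reduces independent noise by roughly the square root of the frame count, which is why registration precision dominates the final quality, as the abstract concludes.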
25

Akremi, Mohamed. "Manifold-Based Approaches for Action and Gesture Recognition". Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPAST045.

Testo completo
Abstract (sommario):
La reconnaissance des actions humaines (HAR) est devenue un domaine de recherche essentiel en raison de ses nombreuses applications dans le monde réel, notamment l'interaction homme-machine, la santé intelligente, la réalité virtuelle, la surveillance, le contrôle des drones (UAV) et les systèmes autonomes. Au cours des dernières décennies, de nombreuses approches ont été développées pour reconnaître les actions humaines à partir de séquences vidéo RGB monoculaires. Plus récemment, l'émergence des capteurs de profondeur a favorisé le développement de l'analyse des activités en 3D et de la reconnaissance des gestes en 3D, entraînant des avancées significatives dans le domaine. Parmi les différentes techniques proposées, les approches basées sur les variétés ont gagné en importance en raison de leur capacité à modéliser efficacement l'évolution temporelle des données squelettiques 3D grâce à des représentations invariantes aux variétés. Ces méthodes ont démontré des performances remarquables dans la résolution du défi de la reconnaissance des actions.Dans ce travail, nous explorons les propriétés de la variété des matrices Symmetric Positive Definite (SPD), l'une des plus utilisées en reconnaissance des actions et des gestes. Nous proposons un cadre de reconnaissance innovant intégrant un détecteur et un classificateur, en exploitant un réseau de neurones basé sur SPD, connu sous le nom de SPD Siamese Neural Network. Nous validons ses performances par le biais d'expériences approfondies sur des séquences d'actions segmentées et continues à travers plusieurs ensembles de données. Nos résultats montrent que cette approche surpasse les méthodes de l'état de l'art dans divers scénarios.Malgré ces avancées, des défis majeurs subsistent, en particulier dans des environnements complexes tels que la reconnaissance des actions humaines par drone (UAV). 
Pour pallier ces limitations, nous introduisons un modèle amélioré, SPDAGG-TransNet, qui optimise le réseau SPD Siamese en affinant l'extraction des caractéristiques spatio-temporelles et en intégrant un module Transformer. Cette amélioration renforce la capacité du modèle à capturer les dépendances à long terme, enrichir les représentations des caractéristiques et préserver les propriétés géométriques intrinsèques des représentations SPD. L'intégration d'encodeurs Transformer améliore encore la précision de la reconnaissance en modélisant efficacement les dynamiques locales et globales du mouvement. Des évaluations approfondies sur des ensembles de données de référence, notamment DHG-14, UAV-Human et UAV-Gesture, démontrent que SPDAGG-TransNet atteint des performances de pointe.Au-delà des approches basées sur SPD, nous explorons également l'espace hyperbolique comme cadre géométrique alternatif pour la reconnaissance des mouvements. Les réseaux de neurones hyperboliques (HNNs) constituent une voie prometteuse pour modéliser les relations hiérarchiques et structurées des données de mouvement. Contrairement aux modèles d'apprentissage profond conventionnels basés sur l'espace euclidien, les architectures hyperboliques exploitent les transformations de Lorentz et des techniques d'optimisation avancées, telles que Riemannian Adam optimizer, pour stabiliser les embeddings et améliorer l'évolutivité. Ces avancées permettent une modélisation plus efficace des mouvements hiérarchiques, rendant l'apprentissage hyperbolique particulièrement adapté aux tâches de reconnaissance des actions.Des expériences approfondies sur plusieurs ensembles de données, de la reconnaissance des gestes de la main aux actions du corps et aux données UAV, confirment l'efficacité des approches basées sur SPD et l'espace hyperbolique dans des scénarios complexes. 
Nos résultats soulignent la supériorité des cadres d'apprentissage géométrique pour modéliser avec précision les mouvements humains, garantir une adaptabilité en temps réel et dépasser les limites des méthodes euclidiennes traditionnelles
Human action recognition (HAR) has emerged as a critical research area due to its wide range of real-world applications, including human-computer interaction, intelligent healthcare, virtual reality, surveillance, UAV control, and autonomous systems. Over the past few decades, numerous approaches have been developed to recognize human actions from monocular RGB video sequences. More recently, the advent of depth sensors has fueled the growth of 3D activity analysis and 3D gesture recognition, leading to significant advancements in the field. Among the various techniques proposed, manifold-based approaches have gained prominence due to their ability to effectively model the temporal evolution of 3D skeletal data through manifold-invariant representations. These methods have demonstrated remarkable performance in addressing the challenging task of action recognition.In this work, we explore the properties of the Symmetric Positive Definite (SPD) manifold, one of the most widely used manifolds in action and gesture recognition. We propose a novel recognition framework that integrates both a detector and a classifier, leveraging an SPD-based neural network known as the SPD Siamese Neural Network. We validate its performance through extensive experiments on both segmented and continuous action sequences across multiple datasets. Our results demonstrate that this approach outperforms state-of-the-art methods in various scenarios.Despite these advancements, significant challenges persist, particularly in complex environments such as UAV-based human action recognition. To overcome these limitations, we introduce an improved model, SPDAGG-TransNet, which enhances the baseline SPD Siamese network by refining its temporal-spatial feature extraction and integrating a Transformer module. This enhancement strengthens the model's ability to capture long-range dependencies, enrich feature representations, and maintain the intrinsic geometric properties of SPD representations. 
By incorporating Transformer encoders, our approach further improves recognition accuracy by effectively modeling both local and global motion dynamics. Extensive evaluations on benchmark datasets, including DHG-14, UAV-Human, and UAV-Gesture, demonstrate that SPDAGG-TransNet achieves state-of-the-art performance. Beyond SPD-based approaches, we also explore hyperbolic space as an alternative geometric framework for motion recognition. Hyperbolic neural networks (HNNs) offer a promising direction for modeling hierarchical and structured relationships in motion data. Unlike conventional Euclidean-based deep learning models, hyperbolic architectures leverage Lorentz transformations and novel optimization techniques, such as the Riemannian Adam optimizer, to stabilize embeddings and enhance scalability. These advancements enable more effective hierarchical motion modeling, making hyperbolic learning particularly suitable for action recognition tasks. Extensive experiments conducted across multiple benchmarks, ranging from hand gesture recognition to full-body action recognition and UAV-based datasets, demonstrate the effectiveness of both SPD-based and hyperbolic-based approaches in challenging scenarios. Our findings highlight the superiority of geometric learning frameworks in accurately modeling human motion, ensuring real-time adaptability, and overcoming the limitations of traditional Euclidean methods.
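The SPD-based pipelines summarized above operate on covariance-style descriptors of skeletal motion and on Riemannian distances between them. As a minimal illustration of that geometry (not the thesis's SPD Siamese network; the descriptor, the log-Euclidean metric, and all names below are common defaults we chose, not the author's code):

```python
import numpy as np

def spd_logm(m):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def covariance_descriptor(joints):
    """SPD covariance descriptor of a skeletal sequence.
    joints: (T, D) array of D joint coordinates over T frames."""
    c = np.cov(joints, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])   # regularize to keep it strictly SPD

def log_euclidean_distance(a, b):
    """Frobenius distance between matrix logarithms of two SPD matrices."""
    return float(np.linalg.norm(spd_logm(a) - spd_logm(b), ord="fro"))

rng = np.random.default_rng(0)
seq_a = rng.standard_normal((50, 6))                               # one motion sequence
seq_b = seq_a + 0.5 * np.sin(np.linspace(0, 3, 50))[:, None]       # perturbed motion
d = log_euclidean_distance(covariance_descriptor(seq_a), covariance_descriptor(seq_b))
print(round(d, 3))
```

A nearest-neighbour classifier over such distances is the simplest baseline that manifold-based networks like the ones above improve upon.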
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Roudot, Philippe. "Image processing methods for dynamical intracellular processes analysis in quantitative fluorescence microscopy". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S025/document.

Testo completo
Abstract (sommario):
Nous présentons dans la première partie du document une étude portant sur l'imagerie de temps de vie de fluorescence sur structures dynamiques dans le domaine de fréquence (FD FLIM). Une mesure en FD FLIM est définie par une série d'images présentant une variation d'intensité sinusoïdale. La variation d'un temps de vie se traduit par une variation dans la phase de la sinusoïde décrite par l'intensité. Notre étude comporte deux contributions principales: une modélisation du processus de formation de l'image et du bruit inhérent au système d'acquisition (capteur ICCD) ; une méthode robuste d'estimation du temps de vie sur des structures mobiles et des vésicules intracellulaires. Nous présentons ensuite une étude en microscopie de fluorescence portant sur la quantification du transport hétérogène dans un environnement intracellulaire dense. Les transitions entre la diffusion Brownienne dans le cytoplasme et les transports actifs supportés par le cytosquelette sont en effet des scénarios très couramment observés dans des cellules vivantes. Nous montrons que les algorithmes classiques de suivi d'objets nécessaires dans ce contexte, ne sont pas conçus pour détecter les transitions entre ces deux types de mouvement. Nous proposons donc un nouvel algorithme, inspiré de l'algorithme u-track [Jaqaman et al., 2008], qui s'appuie sur plusieurs filtrages de Kalman adaptés à différents types de transport (Brownien, dirigé...), indépendamment pour chaque objet suivi. Nous illustrons sur séquences simulées et expérimentales (vimentine, virus) l'aptitude de notre algorithme à détecter des mouvements dirigés rares.
We propose in this manuscript a study of the instrumentation required for quantification in frequency-domain fluorescence lifetime imaging microscopy (FD FLIM). An FD FLIM measurement is defined as a series of images with sinusoidal intensity variations. The fluorescence lifetime is defined as the nanosecond-scale delay between excitation and emission of fluorescence. We propose two main contributions in the area: a modeling of the image formation process and of the noise introduced by the acquisition system (ICCD sensor); and a robust statistical method for lifetime estimation on moving structures and intracellular vesicles. The second part presents a contribution to the tracking of multiple particles presenting heterogeneous transport in dense conditions. We focus here on the switching between confined diffusion in the cytosol and motor-mediated active transport in random directions. We show that current multiple-model filtering and gating strategies fail at estimating unpredictable transitions between Brownian and directed displacements. We propose a new algorithm, built on the u-track algorithm [Jaqaman et al., 2008], that runs a set of Kalman filters adapted to several motion types independently for each tracked object. The algorithm has been evaluated on simulated and real (vimentin, virus) data. We show that our method outperforms competing methods not only in the targeted scenario but also on more homogeneous types of dynamics challenged by high particle density.
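The tracking approach described above scores competing motion models per object via Kalman filtering. The sketch below is our own minimal illustration, not the thesis's u-track extension; all parameters are arbitrary. It runs a random-walk ("Brownian") filter and a constant-velocity ("directed") filter on the same 1-D track and compares their normalized squared innovations, the kind of quantity multiple-model schemes use to detect transitions:

```python
import numpy as np

class Kalman:
    """Minimal linear Kalman filter; step() returns the normalized squared
    innovation, a per-frame measure of how well the motion model fits."""
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0
    def step(self, z):
        self.x = self.F @ self.x                        # predict state
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = z - self.H @ self.x                         # innovation (prediction error)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y                         # update
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return float(y @ np.linalg.inv(S) @ y)

# Brownian model: random-walk position. Directed model: constant velocity.
brownian = Kalman(np.eye(1), np.eye(1), 0.5 * np.eye(1), 0.1 * np.eye(1),
                  np.zeros(1), np.eye(1))
directed = Kalman(np.array([[1., 1.], [0., 1.]]), np.array([[1., 0.]]),
                  0.01 * np.eye(2), 0.1 * np.eye(1), np.zeros(2), np.eye(2))

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(30), 2.0 * np.arange(1, 31)])  # still, then directed
obs = truth + 0.3 * rng.standard_normal(60)
scores = np.array([[brownian.step(np.array([z])), directed.step(np.array([z]))]
                   for z in obs])
# Well after the transition, the directed model should fit better (lower score).
print(scores[45:, 0].mean(), scores[45:, 1].mean())
```

Real trackers additionally gate associations and smooth these per-frame scores before declaring a motion switch.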
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Guarino, de Vasconcelos Luiz Eduardo, André Yoshimi Kusomoto e Nelson Paiva Oliveira Leite. "Using Image Processing and Pattern Recognition in Images from Head-Up Display". International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579665.

Testo completo
Abstract (sommario):
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
Image frames have always been used as an information source for Flight Test Campaigns (FTC). During flight tests, the images displayed on the Head-Up Display (HUD) can be stored for later analysis. HUD images present aircraft data provided by the avionics system. For a simplified Flight Test Instrumentation (FTI), where data accuracy is not a big issue, HUD images can become the primary information source. However, in this case data analysis is executed manually, frame by frame, to extract information (e.g. aircraft position parameters: latitude, longitude and altitude). Approximately one hour of flight test generates about 36,000 frames in standard-definition television format, so data extraction becomes complex, time-consuming and prone to failures. To improve the efficiency and effectiveness of this FTC, the Instituto de Pesquisas e Ensaios em Voo (IPEV - Flight Test and Research Institute), together with the Instituto Tecnológico de Aeronáutica (ITA - Aeronautical Technology Institute), developed an image processing application with pattern recognition that uses the correlation process to extract information from different positions in the HUD images. Preliminary test and evaluation were carried out in 2012 using HUD images of the EMBRAER A1 jet fighter. The test results demonstrate satisfactory performance for this tool.
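Pattern recognition "using the correlation process", as described above, typically means template matching by normalized cross-correlation (NCC). A minimal brute-force sketch (our own illustration; the synthetic "glyph" and frame stand in for HUD symbology):

```python
import numpy as np

def ncc_match(image, template):
    """Return ((row, col), score) of the best normalized cross-correlation match.
    Brute-force sliding window; fine for small HUD character templates."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0:
                continue                      # flat window, undefined correlation
            score = (wz * t).sum() / denom    # NCC is invariant to brightness/contrast
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Synthetic frame: a 5x5 "glyph" pasted at a known offset onto low-level noise
rng = np.random.default_rng(1)
glyph = rng.random((5, 5))
frame = 0.05 * rng.random((40, 60))
frame[12:17, 33:38] += glyph
pos, score = ncc_match(frame, glyph)
print(pos, round(score, 3))
```

Once each symbol's position is located, the digits at known HUD positions can be read out frame by frame instead of by manual inspection.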
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Gopalan, Sowmya. "Estimating Columnar Grain Size in Steel-Weld Images using Image Processing Techniques". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250621610.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Ben, Rabha Jamal Salh. "An image processing decisional system for the Achilles tendon using ultrasound images". Thesis, University of Salford, 2018. http://usir.salford.ac.uk/46561/.

Testo completo
Abstract (sommario):
The Achilles Tendon (AT) is described as the largest and strongest tendon in the human body. As with any other organ in the human body, the AT is associated with some medical problems, including Achilles rupture and Achilles tendonitis. AT rupture affects about 1 in 5,000 people worldwide. Additionally, AT injuries are seen in about 10 percent of patients involved in sports activities. Today, ultrasound imaging plays a crucial role in medical imaging technologies. It is portable, non-invasive, free of radiation risks, relatively inexpensive and capable of taking real-time images. There is a lack of research that looks into the early detection and diagnosis of AT abnormalities from ultrasound images. This motivated the researcher to build a complete system which enables one to crop, denoise, enhance, extract the important features and classify AT ultrasound images. The proposed application focuses on developing an automated system platform. Generally, systems for analysing ultrasound images involve four stages: pre-processing, segmentation, feature extraction and classification. To produce the best results for classifying the AT, the SRAD, CLAHE, GLCM, GLRLM and KPCA algorithms have been used. This was followed by the use of different standard and ensemble classifiers trained and tested using the dataset samples and reduced features to categorize the AT images into normal or abnormal. Various classifiers have been adopted in this research to improve the classification accuracy. To build an image decisional system, a set of 57 AT ultrasound images was collected. These images were used in three different approaches in which the Region of Interest (ROI) position and size are located differently. To avoid misleading metrics on the imbalanced data, different evaluation metrics have been adopted to compare the classifiers and evaluate the overall classification accuracy. The classification outcomes are evaluated using different metrics in order to estimate the decisional system performance.
A high accuracy of 83% was achieved during the classification process. Most of the ensemble classifiers worked better than the standard classifiers in all three ROI approaches. The research aim was achieved by building an image processing decisional system for AT ultrasound images. This system can distinguish between normal and abnormal AT ultrasound images. In this decisional system, AT images were improved and enhanced to achieve a high classification accuracy without any user intervention.
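Among the feature extractors named above, the GLCM is compact enough to illustrate. The sketch below is our own minimal version (real pipelines use library implementations, multiple displacements, and more Haralick features): it builds a gray-level co-occurrence matrix and two classic features that separate smooth from rough texture:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-Level Co-occurrence Matrix for one pixel displacement, normalized."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize grays
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast and energy, two common Haralick-style texture features."""
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())  # high for rough texture
    energy = float((p ** 2).sum())              # high for uniform texture
    return contrast, energy

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 32), (32, 1))   # smooth horizontal gradient
noisy = rng.integers(0, 256, (32, 32))               # rough random texture
for name, im in [("smooth", smooth), ("noisy", noisy)]:
    c, e = glcm_features(glcm(im))
    print(name, round(c, 3), round(e, 3))
```

Feature vectors of this kind, reduced (e.g. by KPCA) and fed to classifiers, are what the decisional system above trains on.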
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Ouellet, Michel. "Image processing architectures". Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/5068.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Gardiner, Bryan. "Hexagonal image processing". Thesis, University of Ulster, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535794.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Hu, Nan. "SECURE IMAGE PROCESSING". UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/448.

Testo completo
Abstract (sommario):
In today's heterogeneous network environment, there is a growing demand for mutually distrusting parties to jointly execute distributed algorithms on private data whose secrecy needs to be safeguarded. Platforms that support such computation for image processing purposes are called secure image processing protocols. In this thesis, we propose a new security model, called quasi information-theoretic (QIT) security. Under the proposed model, efficient protocols for two basic image processing algorithms, linear filtering and thresholding, are developed. For both problems we consider two situations: 1) only two parties are involved, where one holds the data and the other possesses the processing algorithm; 2) an additional non-colluding third party exists. Experiments show that our proposed protocols significantly improve computation time compared with their classical cryptographic counterparts, while providing a reasonable amount of security, as proved in the thesis.
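Secure evaluation of a *linear* filter is possible because convolution commutes with additive secret sharing. The sketch below illustrates only that linearity argument, with floating-point masks; actual protocols (including the QIT ones proposed in the thesis) work over finite rings and add further machinery for real security:

```python
import numpy as np

def mean3(img):
    """3x3 box (mean) filter, a linear operation, via shifted padded copies."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
image = rng.random((16, 16))          # the private image

share_a = rng.random(image.shape)     # random mask held by party A
share_b = image - share_a             # masked residual held by party B

# Each party filters its own share locally; neither ever sees the full image.
result = mean3(share_a) + mean3(share_b)
print(np.allclose(result, mean3(image)))   # linearity: filtered shares recombine
```

Nonlinear steps such as thresholding do not commute with sharing, which is why they require dedicated protocols in the thesis.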
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Zhang, Yi. "Blur Image Processing". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448384360.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Das, Mohammed. "Image analysis techniques for vertebra anomaly detection in X-ray images". Diss., Rolla, Mo. : University of Missouri--Rolla i.e. [Missouri University of Science and Technology], 2008. http://scholarsmine.mst.edu/thesis/MohammedDas_Thesis_09007dcc804c3cf6.pdf.

Testo completo
Abstract (sommario):
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Degree granted by Missouri University of Science and Technology, formerly known as University of Missouri--Rolla. Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed March 24, 2008). Includes bibliographical references (p. 87-88).
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Yau, Chin-ko, e 游展高. "Super-resolution image restoration from multiple decimated, blurred and noisy images". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30292529.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Zhang, Jun. "Rendering and Image Processing for Micro Lithography on Xeon Phi Knights Landing Processor". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241088.

Testo completo
Abstract (sommario):
The Segment program in Mycronic's laser mask writers converts vector graphics into raster images with high computation intensity. Intel® Xeon Phi™ Knights Landing (KNL) is a many-core processor delivering massive thread and data parallelism. This project explores whether KNL can be a good candidate as a data processing platform in microlithography applications. The feasibility is studied by profiling the program on KNL and by comparing the performance on KNL with other architectures, including the current platform. Several optimization methods are implemented targeting KNL, resulting in speed-ups of up to 5%. The cost of the systems is taken into consideration. The highly parallel application can take advantage of the large number of cores, which, together with the relatively low price of KNL, leads to high performance per cost. Hence, KNL can be a suitable replacement for the current platform as a high-performance pattern generator.
Segmentprogrammet i Mycronics laserskrivare omvandlar vektorgrafik till rasterbild med hög beräkningsintensitet. Intel® Xeon Phi™ Knights Landing (KNL) är en processor med många kärnor som levererar omfattande tråd- och dataparallellitet. Detta projekt undersöker om KNL kan vara en bra kandidat som databehandlingsplattform i mikrolitografiska applikationer. Genomförbarheten studeras genom att profilera programmet på KNL tillsammans med att jämföra prestanda på KNL med andra arkitekturer, inklusive den nuvarande plattformen. Flera optimeringsmetoder implementeras med inriktning på KNL, vilket resulterar i effektivitetshöjningar upp till 5 %. Kostnaden för systemen beaktas. Den högt parallelliserade applikationen kan dra fördel av det stora antalet kärnor, vilket leder till hög prestanda per kostnad tillsammans med det relativt låga priset på KNL. Därför kan KNL vara en bra ersättare för den nuvarande plattformen som en högpresterande mönstergenerator.
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Liu, Chia-Chin. "Image quality as a function of unsharp masking band center /". Online version of thesis, 1988. http://hdl.handle.net/1850/10420.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Li, Shyi-Shyang. "Comparing the ability of subjective quality factor and information theory to predict image quality /". Online version of thesis, 1994. http://hdl.handle.net/1850/11880.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Glotfelty, Joseph Edmund. "Automatic selection of optimal window size and shape for texture analysis". Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=898.

Testo completo
Abstract (sommario):
Thesis (M.A.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vii, 59 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 55-59).
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Kim, Kyu-Heon. "Segmentation of natural texture images using a robust stochastic image model". Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307927.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Neupane, Aashish. "Visual Saliency Analysis on Fashion Images Using Image Processing and Deep Learning Approaches". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2784.

Testo completo
Abstract (sommario):
AASHISH NEUPANE, for the Master of Science degree in BIOMEDICAL ENGINEERING, presented on July 35, 2020, at Southern Illinois University Carbondale. TITLE: VISUAL SALIENCY ANALYSIS ON FASHION IMAGES USING IMAGE PROCESSING AND DEEP LEARNING APPROACHES. MAJOR PROFESSOR: Dr. Jun Qin. State-of-the-art computer vision technologies have been applied to fashion in multiple ways, and saliency modeling is one of those applications. In computer vision, a saliency map is a 2D topological map which indicates the probabilistic distribution of visual attention priorities. This study focuses on the analysis of visual saliency on fashion images using multiple saliency models, evaluated by several evaluation metrics. A human subject study has been conducted to collect people's visual attention on 75 fashion images. Binary ground-truth fixation maps for these images have been created from the experimentally collected visual attention data using a Gaussian blurring function. Saliency maps for these 75 fashion images were generated using multiple conventional saliency models as well as deep feature-based state-of-the-art models. DeepFeat has been studied extensively, with 44 sets of saliency maps, exploiting the features extracted from GoogLeNet and ResNet50. Seven other saliency models have also been utilized to predict saliency maps on these images. The results were compared over five evaluation metrics: AUC, CC, KL divergence, NSS and SIM. The performance of all eight saliency models in predicting visual attention on fashion images was comparable to the benchmarked scores over all five metrics. Furthermore, the models perform consistently well over multiple evaluation metrics, thus indicating that saliency models could in fact be applied to effectively predict salient regions in random fashion advertisement images.
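Two of the evaluation metrics named above, CC and NSS, are short enough to state directly. A minimal sketch (our own illustration; a synthetic Gaussian "saliency map" and a binary fixation map stand in for the experimental data):

```python
import numpy as np

def cc(sal, gt):
    """Pearson linear correlation between a saliency map and a fixation map."""
    s, g = sal - sal.mean(), gt - gt.mean()
    return float((s * g).sum() / np.sqrt((s * s).sum() * (g * g).sum()))

def nss(sal, fix):
    """Mean of the z-scored saliency map at the binary fixation locations."""
    z = (sal - sal.mean()) / sal.std()
    return float(z[fix > 0].mean())

rng = np.random.default_rng(0)
fix = np.zeros((64, 64))
fix[20:24, 30:34] = 1                                  # binary fixation cluster
yy, xx = np.mgrid[0:64, 0:64]                          # saliency peak near fixations
sal = np.exp(-((yy - 22) ** 2 + (xx - 32) ** 2) / 50.0) + 0.1 * rng.random((64, 64))
print(round(cc(sal, fix), 3), round(nss(sal, fix), 3))
```

AUC, KL divergence and SIM follow the same pattern: each scores how well the predicted map concentrates probability where observers actually fixated.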
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Madaris, Aaron T. "Characterization of Peripheral Lung Lesions by Statistical Image Processing of Endobronchial Ultrasound Images". Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1485517151147533.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Shajahan, Sunoj. "Agricultural Field Applications of Digital Image Processing Using an Open Source ImageJ Platform". Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/29711.

Testo completo
Abstract (sommario):
Digital image processing is one of the potential technologies used in precision agriculture to gather information, such as seed emergence, plant health, and phenology, from digital images. Despite its potential, the rate of adoption is slow due to limited accessibility, unsuitability to specific issues, unaffordability, and the high technical knowledge required of the clientele. Therefore, the development of open source image processing applications that are task-specific, easy to use, require fewer inputs, and are rich in features will encourage adoption by users and farmers. Fiji, a free and open source distribution of the ImageJ image processing platform, was used in this application development study. Four different agricultural field applications were selected to address existing issues and to develop image processing tools by applying novel approaches and simple mathematical principles. First, an automated application, using a digital image and a “pixel-march” method, performed multiple radial measurements of sunflower floral components. At least 32 measurements for ray florets and eight for the disc were required statistically for accurate dimensions. Second, color calibration of digital images addressed light intensity variations using a standard calibration chart and a color calibration matrix derived from selected color patches. Calibration using just three color patches (red, green, and blue) was sufficient to obtain images of uniform intensity. Third, plant stand counts and their spatial distribution were determined from UAS images with an accuracy of ≈96%, through a pixel-profile identification method and plant cluster segmentation. Fourth, soybean phenological stages from PhenoCam time-lapse imagery were analyzed and matched manual visual observations. The green leaf index produced the minimum variation from its smoothed curve.
The time of image capture and PhenoCam distance had significant effects on the vegetation indices analyzed. A simplified approach using kymographs was developed, which was quick and efficient for phenological observations. Based on the study, these tools can be applied equally to other scenarios, or new user-coded, user-friendly image processing tools can be developed to address specific requirements. In conclusion, these successful results demonstrate the suitability of developing task-specific, open source, digital image processing tools for agricultural field applications.
United States. Agricultural Research Service
National Institute of Food and Agriculture (U.S.)
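The green leaf index mentioned above has a standard per-pixel form, GLI = (2G − R − B) / (2G + R + B). A minimal sketch (our own illustration; the sample pixel values are invented):

```python
import numpy as np

def green_leaf_index(rgb):
    """Per-pixel GLI = (2G - R - B) / (2G + R + B); higher means greener."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2 * g - r - b) / (2 * g + r + b + 1e-9)   # epsilon avoids division by zero

leaf = np.array([[[40, 120, 30]]], dtype=np.uint8)    # green canopy pixel
soil = np.array([[[120, 90, 70]]], dtype=np.uint8)    # brownish background pixel
print(float(green_leaf_index(leaf).mean()), float(green_leaf_index(soil).mean()))
```

Averaging GLI over each time-lapse frame yields the seasonal greenness curve from which phenological stages such as emergence and senescence are read.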
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Elmowafy, Osama Mohammed Elsayed. "Image processing systems for TV image tracking". Thesis, University of Kent, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310164.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Karlsson, Simon, e Per Welander. "Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images". Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148475.

Testo completo
Abstract (sommario):
Generative Adversarial Networks (GANs) are a deep learning method developed for synthesizing data. One application for which they can be used is image-to-image translation. This could prove valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands highly diverse training data. The scenarios in the medical field are fewer, but the problem is instead that it is difficult, time-consuming and expensive to collect training data. This thesis evaluates different GAN models by comparing synthetic MR images produced by the models against ground truth images. A perceptual study is also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It is also shown that models producing more visually realistic synthetic images do not necessarily score better on quantitative error measurements against ground truth data. Along with the investigations on medical images, the thesis explores the possibilities of generating synthetic street view images of different resolutions, light and weather conditions. Different GAN models have been compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Schultz, Leah Hastings Samantha K. "Image manipulation and user-supplied index terms". [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-9828.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Wang, Bin. "Pixel-parallel image processing techniques and algorithms". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/pixelparallel-image-processing-techniques-and-algorithms(848f077c-4594-40f0-8dbe-8ac39fc69d0f).html.

Testo completo
Abstract (sommario):
The motivation of the research presented in this thesis is to investigate image processing algorithms utilising various SIMD parallel devices, especially massively parallel Cellular Processor Arrays (CPAs), to accelerate their processing speed. Various SIMD processors with different architectures are reviewed, and their features are analysed. The different types of parallelism contained in image processing tasks are also analysed, and methodologies to exploit data-level parallelism are discussed. The efficiency of the pixel-per-processor architecture in computer vision scenarios is discussed, as well as its limitations. Aiming to solve the problem that CPA array dimensions are usually smaller than the resolution of the images to be processed, a “coarse grain mapping method” is proposed. It gives CPAs the ability to process images with higher resolution than the arrays themselves by allowing each processing element to handle multiple pixels. It is completely software based, easy to implement, and easy to program. To demonstrate the efficiency of the pixel-level parallel approach, two image processing algorithms specially designed for pixel-per-processor arrays are proposed: a parallel skeletonization algorithm based on two-layer trigger-wave propagation, and a parallel background detection algorithm. Implementations of the proposed algorithms on different platforms (i.e. CPU, GPU and CPA) are presented and evaluated. Evaluation results indicate that the proposed algorithms have advantages in terms of both processing speed and result quality. This thesis concludes that the pixel-per-processor architecture can be used in image processing (or computer vision) algorithms which emphasize analysing pixel-level information, to significantly boost their processing speed.
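Background detection suits pixel-per-processor arrays because the same update runs at every pixel. The sketch below is a generic exponential running-average model, not the thesis's algorithm (all parameters are ours); it shows the kind of identical per-pixel operation a CPA would execute in lockstep:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Per-pixel exponential running average; the identical operation at every
    pixel is what makes this a natural fit for SIMD/CPA hardware."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=0.2):
    """Pixels deviating strongly from the background model are foreground."""
    return np.abs(frame - bg) > thresh

rng = np.random.default_rng(0)
scene = rng.random((32, 32)) * 0.1          # static scene
bg = scene.copy()
for _ in range(50):                         # background model converges
    bg = update_background(bg, scene + 0.01 * rng.standard_normal((32, 32)))

frame = scene.copy()
frame[10:15, 10:15] += 0.8                  # a 5x5 object appears
mask = foreground_mask(bg, frame)
print(int(mask.sum()))
```

On a CPA, each processing element would hold one pixel's `bg` value in local memory and apply both functions without any inter-pixel communication.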
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Mohd, Padzil Fatihah. "Linear and nonlinear filter for image processing using MATLAB's image processing toolbox". Thesis, Mohd Padzil, Fatihah (2016) Linear and nonlinear filter for image processing using MATLAB's image processing toolbox. Honours thesis, Murdoch University, 2016. https://researchrepository.murdoch.edu.au/id/eprint/30815/.

Testo completo
Abstract (sommario):
This thesis studies techniques in digital image processing. It covers two image processing areas: image restoration and image enhancement. More specifically, image restoration involves the removal of noise, and image enhancement looks into a technique for edge enhancement. Two classes of filters are introduced: linear and nonlinear filters. Two types of noise sources are used: Gaussian noise and salt-and-pepper noise. For noise removal, the mean filter is used as an example of a linear filter and the median filter as an example of a nonlinear filter. For edge enhancement, only a linear filter is used: the unsharp mask filter. The simulation programs are written using the Image Processing Toolbox in MATLAB (MATrix LABoratory). Test images corrupted by noise are used in the investigations to assess the strengths and weaknesses of each type of filter.
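The contrast between the linear mean filter and the nonlinear median filter described above is easy to demonstrate: on impulse (salt-and-pepper) noise the median rejects outliers that the mean smears across the window. A minimal NumPy sketch of the same comparison outside MATLAB (our own illustration; a flat gray test image with salt impulses only):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (nonlinear), built from shifted padded copies."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def mean_filter3(img):
    """3x3 mean filter (linear)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = clean.copy()
noisy[rng.random(clean.shape) < 0.05] = 1.0    # 5% "salt" impulses

mse = lambda a, b: float(((a - b) ** 2).mean())
print(round(mse(mean_filter3(noisy), clean), 5),
      round(mse(median_filter3(noisy), clean), 5))
```

The mean spreads each impulse over nine pixels, while the median restores the original value unless a majority of the window is corrupted, which is why the median's error is far lower here.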
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Pérez, Benito Cristina. "Color Image Processing based on Graph Theory". Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/123955.

Testo completo
Abstract (sommario):
[ES] La visión artificial es uno de los campos en mayor crecimiento en la actualidad que, junto con otras tecnologías como la Biometría o el Big Data, se ha convertido en el foco de interés de numerosas investigaciones y es considerada como una de las tecnologías del futuro. Este amplio campo abarca diversos métodos entre los que se encuentra el procesamiento y análisis de imágenes digitales. El éxito del análisis de imágenes y otras tareas de procesamiento de alto nivel, como pueden ser el reconocimiento de patrones o la visión 3D, dependerá en gran medida de la buena calidad de las imágenes de partida. Hoy en día existen multitud de factores que dañan las imágenes dificultando la obtención de imágenes de calidad óptima, esto ha convertido el (pre-) procesamiento digital de imágenes en un paso fundamental previo a la aplicación de cualquier otra tarea de procesado. Los factores más comunes son el ruido y las malas condiciones de adquisición: los artefactos provocados por el ruido dificultan la interpretación adecuada de la imagen y la adquisición en condiciones de iluminación o exposición deficientes, como escenas dinámicas, causan pérdida de información de la imagen que puede ser clave para ciertas tareas de procesamiento. Los pasos de (pre-)procesamiento de imágenes conocidos como suavizado y realce se aplican comúnmente para solventar estos problemas: El suavizado tiene por objeto reducir el ruido mientras que el realce se centra en mejorar o recuperar la información imprecisa o dañada. Con estos métodos conseguimos reparar información de los detalles y bordes de la imagen con una nitidez insuficiente o un contenido borroso que impide el (post-)procesamiento óptimo de la imagen. Existen numerosos métodos que suavizan el ruido de una imagen, sin embargo, en muchos casos el proceso de filtrado provoca emborronamiento en los bordes y detalles de la imagen. 
De igual manera podemos encontrar una enorme cantidad de técnicas de realce que intentan combatir las pérdidas de información, sin embargo, estas técnicas no contemplan la existencia de ruido en la imagen que procesan: ante una imagen ruidosa, cualquier técnica de realce provocará también un aumento del ruido. Aunque la idea intuitiva para solucionar este último caso será el previo filtrado y posterior realce, este enfoque ha demostrado no ser óptimo: el filtrado podrá eliminar información que, a su vez, podría no ser recuperable en el siguiente paso de realce. En la presente tesis doctoral se propone un modelo basado en teoría de grafos para el procesamiento de imágenes en color. En este modelo, se construye un grafo para cada píxel de tal manera que sus propiedades permiten caracterizar y clasificar dicho pixel. Como veremos, el modelo propuesto es robusto y capaz de adaptarse a una gran variedad de aplicaciones. En particular, aplicamos el modelo para crear nuevas soluciones a los dos problemas fundamentales del procesamiento de imágenes: suavizado y realce. Se ha estudiado el modelo en profundidad en función del umbral, parámetro clave que asegura la correcta clasificación de los píxeles de la imagen. Además, también se han estudiado las posibles características y posibilidades del modelo que nos han permitido sacarle el máximo partido en cada una de las posibles aplicaciones. Basado en este modelo se ha diseñado un filtro adaptativo capaz de eliminar ruido gaussiano de una imagen sin difuminar los bordes ni perder información de los detalles. Además, también ha permitido desarrollar un método capaz de realzar los bordes y detalles de una imagen al mismo tiempo que se suaviza el ruido presente en la misma. Esta aplicación simultánea consigue combinar dos operaciones opuestas por definición y superar así los inconvenientes presentados por el enfoque en dos etapas.
[CAT] La visió artificial és un dels camps en major creixement en l'actualitat que, junt amb altres tecnologies com la Biometria o el Big Data, s'ha convertit en el focus d'interés de nombroses investigacions i és considerada com una de les tecnologies del futur. Aquest ampli camp comprén diversos mètodes entre els quals es troba el processament digital d'imatges i anàlisis d'imatges digitals. L'èxit de l'anàlisis d'imatges i altres tasques de processament d'alt nivell, com poden ser el reconeixement de patrons o la visió 3D, dependrà en gran manera de la bona qualitat de les imatges de partida. Avui dia existeixen multitud de factors que danyen les imatges dificultant l'obtenció d'imatges de qualitat òptima, açò ha convertit el (pre-)processament digital d'imatges en un pas fonamental previ a l'aplicació de qualsevol altra tasca de processament. Els factors més comuns són el soroll i les males condicions d'adquisició: els artefactes provocats pel soroll dificulten la interpretació adequada de la imatge i l'adquisició en condicions d'il·luminació o exposició deficients, com a escenes dinàmiques, causen pèrdua d'informació de la imatge que pot ser clau per a certes tasques de processament. Els passos de (pre-)processament d'imatges coneguts com suavitzat i realç s'apliquen comunament per a resoldre aquests problemes: El suavitzat té com a objecte reduir el soroll mentres que el realç se centra a millorar o recuperar la informació imprecisa o danyada. Amb aquests mètodes aconseguim reparar informació dels detalls i bords de la imatge amb una nitidesa insuficient o un contingut borrós que impedeix el (post-)processament òptim de la imatge. Existeixen nombrosos mètodes que suavitzen el soroll d'una imatge, no obstant això, en molts casos el procés de filtrat provoca difuminat en els bords i detalls de la imatge.
De la mateixa manera podem trobar una enorme quantitat de tècniques de realç que intenten combatre les pèrdues d'informació, no obstant això, aquestes tècniques no contemplen l'existència de soroll en la imatge que processen: davant d'una image sorollosa, qualsevol tècnica de realç provocarà també un augment del soroll. Encara que la idea intuïtiva per a solucionar aquest últim cas seria el previ filtrat i posterior realç, aquest enfocament ha demostrat no ser òptim: el filtrat podria eliminar informació que, al seu torn, podria no ser recuperable en el seguënt pas de realç. En la present Tesi doctoral es proposa un model basat en teoria de grafs per al processament d'imatges en color. En aquest model, es construïx un graf per a cada píxel de tal manera que les seues propietats permeten caracteritzar i classificar el píxel en quëstió. Com veurem, el model proposat és robust i capaç d'adaptar-se a una gran varietat d'aplicacions. En particular, apliquem el model per a crear noves solucions als dos problemes fonamentals del processament d'imatges: suavitzat i realç. S'ha estudiat el model en profunditat en funció del llindar, paràmetre clau que assegura la correcta classificació dels píxels de la imatge. A més, també s'han estudiat les possibles característiques i possibilitats del model que ens han permés traure-li el màxim partit en cadascuna de les possibles aplicacions. Basat en aquest model s'ha dissenyat un filtre adaptatiu capaç d'eliminar soroll gaussià d'una imatge sense difuminar els bords ni perdre informació dels detalls. A més, també ha permés desenvolupar un mètode capaç de realçar els bords i detalls d'una imatge al mateix temps que se suavitza el soroll present en la mateixa. Aquesta aplicació simultània aconseguix combinar dues operacions oposades per definició i superar així els inconvenients presentats per l'enfocament en dues etapes.
[EN] Computer vision is one of the fastest-growing fields at present; along with other technologies such as Biometrics or Big Data, it has become the focus of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays, images are affected by many factors that hinder optimal image quality, which makes digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening improves or recovers imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not contemplate the existence of noise in the image they process: when dealing with a noisy image, any sharpening technique will also amplify the noise.
Although the intuitive solution to this last problem would be filtering first and sharpening afterwards, this approach has proved not to be optimal: the filtering may remove information that, in turn, cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems of image processing: smoothing and sharpening. To achieve high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need for high-precision classification even in the presence of noise. With this classification we build an adaptive soft-switching filter that combines the output of a filter with high smoothing capability with that of a softer one used to smooth edge/detail regions. A further application of the model uses the pixel characterization to successfully perform simultaneous smoothing and sharpening of color images, addressing one of the classical challenges of the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive from both an objective (numerical) and a visual evaluation point of view.
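The soft-switching idea from the abstract above — classify each pixel as flat or edge/detail, then blend a strong smoother with a gentler one accordingly — can be sketched as follows. This is only a minimal illustration: the thesis's graph-based, threshold-driven classifier is replaced here by a simple local-standard-deviation cut-off (`thr`), and the two filters are a plain 3x3 box filter and a mild blend toward the original pixel; all of these choices are assumptions for the sketch, not the author's actual method.

```python
import numpy as np

def soft_switching_smooth(img, thr=10.0):
    """Denoise a grayscale image with a two-filter soft switch.

    'Flat' pixels (low local deviation) receive a strong 3x3 mean
    filter; 'edge/detail' pixels receive a gentler blend that keeps
    most of the original value.  `thr` stands in for the thesis's
    classification threshold, using local std as a crude proxy for
    its graph-based pixel characterization.
    """
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode="edge")
    # gather the 3x3 neighbourhood of every pixel into a (9, H, W) stack
    win = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)])
    mean = win.mean(axis=0)          # strong smoother: box filter
    std = win.std(axis=0)            # local activity measure
    flat = std < thr                 # pixel classification
    soft = 0.75 * img + 0.25 * mean  # gentle smoother for edges/details
    return np.where(flat, mean, soft)
```

On a constant region the output equals the full box-filter average, while near a step edge the gentler blend preserves most of the original contrast instead of smearing it.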
Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
APA, Harvard, Vancouver, ISO and other styles
50

Bibby, Geoffrey Thomas. "Digital image processing using parallel processing techniques". Thesis, Liverpool John Moores University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304539.

Full text
APA, Harvard, Vancouver, ISO and other styles
