
Dissertations / Theses on the topic 'Pixels – Classification'

Consult the top 50 dissertations / theses for your research on the topic 'Pixels – Classification.'


1

Faraklioti, M. "Classification of sets of mixed pixels in remote sensing." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/844613/.

Abstract:
Recently, remotely sensed multispectral data have proved very useful for many applications in the field of Earth surveys. For certain applications, however, limits in the spatial resolution of satellite sensors and variation in the ground surface restrict the usefulness of the available data, since the observed spectral signature of a pixel is the result of a number of surface materials found in the area of the pixel. Two mixed-pixel classification techniques which have shown high correlation with the vegetation coverage of single pixels are described in this thesis: vegetation indices and the linear mixing model. The two approaches are adjusted to deal with sets of pixels rather than individual pixels: the sets of pixels are treated as statistical distributions whose moments can be estimated, and the vegetation indices and the linear mixing model can then be expressed in terms of these statistics. The illumination direction is an important factor that should be taken into account in mixed-pixel classification, since it modifies the statistics of the distributions of pixels, yet it has received no attention until now. The effect of illumination on the relation between the vegetation indices and the proportions of sets of mixed pixels is examined. It is demonstrated that some vegetation indices, which are defined from the ratio of statistics in two spectral bands, can be considered relatively invariant to illumination changes. Finally, a new illumination-invariant mixing model is proposed, expressed in terms of photometric invariant statistics. It is shown to perform very well and can be used to accurately un-mix sets of pixels under many illumination angles. The newly introduced mixing model can be considered a suitable choice in the mixed-pixel classification field. Key words: Mixed pixels, sets of pixels, vegetation index, illumination invariants.
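Both techniques named in this abstract admit a tiny two-band sketch; the endmember signatures and mixing fractions below are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Endmember signatures as columns: band values (red, NIR) for vegetation and
# soil. These numbers are illustrative only.
E = np.array([[0.05, 0.25],   # red reflectance of [vegetation, soil]
              [0.40, 0.30]])  # NIR reflectance of [vegetation, soil]

# A mixed pixel under the linear mixing model: 60% vegetation, 40% soil.
true_f = np.array([0.6, 0.4])
pixel = E @ true_f

# NDVI, a ratio-based vegetation index: (NIR - red) / (NIR + red).
red, nir = pixel
ndvi = (nir - red) / (nir + red)

# Linear unmixing: recover the fractions by least squares with a sum-to-one
# constraint appended as an extra row.
A = np.vstack([E, np.ones(2)])
b = np.append(pixel, 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The same statistics-of-distributions idea from the thesis would replace `pixel` by moments of a whole set of pixels, but the algebra of the ratio index and the unmixing step is as above.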
2

Ghimire, Santosh. "Classification of image pixels based on minimum distance and hypothesis testing." Kansas State University, 2011. http://hdl.handle.net/2097/8547.

Abstract:
Master of Science, Department of Statistics, Haiyan Wang. We introduce a new classification method that is applicable to classify image pixels. This work was motivated by the test-based classification (TBC) introduced by Liao and Akritas (2007). We found that direct application of TBC to image pixel classification can lead to a high misclassification rate. We propose a method that combines the minimum distance and evidence from hypothesis testing to classify image pixels. The method is implemented in the R programming language. Our method eliminates the drawback of Liao and Akritas (2007). Extensive experiments show that our modified method works better in the classification of image pixels in comparison with some standard methods of classification, namely Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Classification Tree (CT), Polyclass classification, and TBC. We demonstrate that our method works well for both grayscale and color images.
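The minimum-distance half of the method can be sketched as follows (the hypothesis-testing evidence is omitted); the class names, training pixels, and two-band features are invented.

```python
import numpy as np

# Per-class training pixels in two spectral bands (synthetic values).
train = {
    "forest": np.array([[0.10, 0.80], [0.12, 0.75], [0.08, 0.82]]),
    "water":  np.array([[0.05, 0.10], [0.07, 0.12], [0.04, 0.09]]),
}
# Summarize each class by its mean vector.
means = {c: x.mean(axis=0) for c, x in train.items()}

def classify(pixel):
    # Assign the pixel to the class with the nearest mean (Euclidean distance).
    return min(means, key=lambda c: np.linalg.norm(pixel - means[c]))

label = classify(np.array([0.09, 0.79]))
```

In the thesis the distance evidence is combined with a test statistic per class; here only the distance rule is shown.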
3

Samuelsson, Emil. "Classification of skin pixels in images : Using feature recognition and threshold segmentation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155400.

Abstract:
The purpose of this report is to investigate and answer the research question: how can current skin segmentation thresholding methods be improved in terms of precision, accuracy, and efficiency by using feature recognition, pre- and post-processing? In this work, a novel algorithm is presented for the classification of skin pixels in images. Different pre-processing methods were evaluated to improve the overall performance of the algorithm, mainly image smoothing and histogram equalization. Smoothing with a Gaussian kernel combined with contrast-limited adaptive histogram equalization (CLAHE) was found to give the best result. A face recognition technique based on learned face features was used to identify a skin color range for each image. Threshold segmentation was then used, based on the obtained skin color range, to extract a skin map for each image. The skin maps were improved by applying the morphological closing operation and by using contour detection to eliminate large false skin structures within skin regions. The skin maps were then evaluated by calculating precision, recall, accuracy, and F-measure on a ground-truth dataset called Pratheepan. This novel approach obtained considerably higher scores than previous work in the field and is thus an improvement on it.
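The evaluation step can be sketched on a synthetic skin map; the toy ground truth, the likelihood values, and the 0.5 threshold are invented, and real inputs would come from the colour-range segmentation described in the abstract.

```python
import numpy as np

# Synthetic ground truth: a square "skin" region in an 8x8 image.
np.random.seed(0)
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True

# A noisy skin-likelihood map; thresholding it yields the binary skin map.
values = np.where(truth, 0.8, 0.2) + np.random.normal(0.0, 0.05, truth.shape)
skin_map = values > 0.5

# Score the skin map against the ground truth.
tp = np.sum(skin_map & truth)
fp = np.sum(skin_map & ~truth)
fn = np.sum(~skin_map & truth)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
```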
4

Bernard, Alice Clara. "The identification of sub-pixel components from remotely sensed data : an evaluation of an artificial neural network approach." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/5045/.

Abstract:
Until recently, methodologies to extract sub-pixel information from remotely sensed data have focused on linear un-mixing models and so-called fuzzy classifiers. Recent research has suggested that neural networks have the potential for providing sub-pixel information. Neural networks offer an attractive alternative as they are non-parametric, they are not restricted to any number of classes, they do not assume that the spectral signatures of pixel components mix linearly, and they do not necessarily have to be trained with pure pixels. The thesis tests the validity of neural networks for extracting sub-pixel information using a combination of qualitative and quantitative analysis tools. Previously published experiments use data sets that are often limited in terms of numbers of pixels and numbers of classes. The data sets used in the thesis reflect the complexity of the landscape. Preparation for the experiments is carried out by analysing the data sets and establishing that the network is not sensitive to particular choices of parameters. Classification results using a conventional type of target with which to train the network show that the response of the network to mixed pixels is different from its response to pure pixels. Different target types are then tested. Although targets which provide detailed compositional information produce higher classification accuracies for subsidiary classes, there is a trade-off between the added information and the added complexity, which can decrease classification accuracy. Overall, the results show that the network seems able to identify the classes that are present within pixels but not their proportions. Experiments with a very accurate data set show that the network behaves like a pattern-matching algorithm and requires examples of mixed pixels in the training data set in order to estimate pixel compositions for unseen pixels. The network does not function like an un-mixing model and cannot interpolate between pure classes.
5

Samaei, Amiryousef. "Evaluating the effect of different distances on the pixels per object and image classification." Thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25880.

Abstract:
In the last decades camera systems have continuously evolved and have found a wide range of applications. One of the main applications of a modern camera system is surveillance of outdoor areas. The camera system, based on local computations, can detect and classify objects autonomously. However, the distance of the objects from the camera plays a vital role in the classification results, which can be especially challenging when lighting conditions are varying. Therefore, in this thesis, we examine the effect of changing distances on objects in terms of the number of pixels. In addition, the effect of distance on classification is studied by preparing four different data sets. To obtain a high signal-to-noise ratio, we integrate thermal and visual image sensors in the same test in order to achieve better spectral resolution. In this study, four different data sets, thermal, visual, binary from visual, and binary from thermal, have been prepared to train the classifier. The categorized objects include bicycle, human, and vehicle. Comparative studies have been performed in order to identify the accuracy of each data set. It has been demonstrated that for fixed distances the bi-level data sets obtained from visual images have better accuracy. Using our setup, the object (human) with a length of 179 and a width of 30 has been classified correctly with minor error up to 150 meters for thermal, visual, as well as binary from visual. Moreover, for bi-level images from thermal, the human object has been correctly classified as far away as 250 meters.
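A back-of-envelope pinhole model illustrates why the number of pixels covering an object shrinks with distance, the effect this thesis measures; the focal length and object height below are assumed values, not the thesis setup.

```python
# Pinhole model: projected size in pixels falls off as 1/distance.
focal_px = 1000.0   # focal length expressed in pixels (assumed)
height_m = 1.8      # object height in metres (assumed)

def pixels_on_target(distance_m):
    # Number of image rows covered by the object at the given distance.
    return focal_px * height_m / distance_m

near = pixels_on_target(50.0)    # 36 pixels tall at 50 m
far = pixels_on_target(150.0)    # 12 pixels tall at 150 m
```

Tripling the distance divides the pixels per object by three, which is why classification accuracy degrades with range.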
6

Villa, Alberto. "Advanced spectral unmixing and classification methods for hyperspectral remote sensing data." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00767250.

Abstract:
The thesis proposes new techniques for the classification and spectral unmixing of hyperspectral remote sensing images. The problems inherent to such data (notably their very high dimensionality and the presence of mixed pixels) are considered, and innovative techniques are proposed to address them. New advanced classification methods, based on traditional dimensionality reduction methods and on the integration of spatial information, are developed. In addition, spectral unmixing methods are used jointly to improve the classification obtained with traditional methods, which also makes it possible to improve the spatial resolution of the classification maps by exploiting sub-pixel information. The work followed a logical progression, with the following steps: 1. Basic observation: to improve the classification of hyperspectral imagery, the problems inherent to the data must be considered, namely very high dimensionality and the presence of mixed pixels. 2. Can advanced classification methods be developed based on traditional dimensionality reduction methods (ICA or others)? 3. How can the different types of contextual information typical of satellite images be used? 4. Can the information provided by spectral unmixing methods be used to propose new dimensionality reduction chains? 5. Can spectral unmixing methods be used jointly to improve the classification obtained with traditional methods? 6. Can the spatial resolution of the classification maps be improved by exploiting sub-pixel information? The proposed methods were tested on several real data sets, showing results comparable to or better than most of the methods presented in the literature.
7

Attia, Dhouha. "Segmentation d'images par combinaison adaptative couleur-texture et classification de pixels : applications à la caractérisation de l'environnement de réception de signaux GNSS." Thesis, Belfort-Montbéliard, 2013. http://www.theses.fr/2013BELF0209/document.

Abstract:
Color and texture are two main sources of information used in image segmentation. The first contribution of this thesis is the joint use of color and texture information through a robust and non-parametric method combining color and texture gradients. The proposed color/texture combination defines a structural gradient that is used as the potential image in a watershed algorithm. The originality of the method consists in studying a 3D point cloud generated by color and texture descriptors, followed by an eigenvalue analysis of the principal components of the cloud's covariance matrix. The color/texture combination method is first tested and compared with well-known methods from the literature, using two databases (the generic BERKELEY database of color images and the VISTEX database of texture images). The applied part of the thesis falls within the ViLoc project (funded by the RFC regional council) and the CAPLOC project (funded by PREDIT). In this framework, the second contribution of the thesis concerns the characterization of the environment of GNSS signal reception. The aim is to improve the estimated position of a mobile receiver in urban environments by excluding NLOS satellites (whose signals are masked or received after reflections on obstacles surrounding the antenna). Two image-processing approaches are proposed to characterize the signal reception environment. The first applies the proposed color/texture combination to images acquired in mobility with a fisheye camera mounted on the roof of a vehicle and oriented toward the sky; segmentation is followed by a binary classification that extracts the two classes of interest, "sky" (LOS signals) and "not sky" (NLOS signals). The second approach, designed to satisfy the real-time constraint of the application, is based on image simplification and adaptive pixel classification. Excluding NLOS satellites improves positioning precision, but only when the LOS satellites (whose signals are received directly) are geometrically well distributed in space. To take the satellite distribution into account and further increase precision, a new position-estimation strategy is proposed, based on the exclusion of NLOS satellites (identified by the image-processing step) conditioned on the DOP information provided by the GPS data.
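The DOP-conditioned exclusion strategy can be sketched numerically; the satellite geometry, the NLOS flags, and the DOP threshold below are invented for illustration, with GDOP computed from the standard geometry matrix.

```python
import numpy as np

def gdop(unit_vectors):
    # Geometry matrix: one row [ux, uy, uz, 1] per satellite; GDOP is the
    # square root of the trace of (G^T G)^-1.
    G = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# Invented unit line-of-sight vectors to five satellites.
sats = np.array([
    [0.0, 0.0, 1.0],
    [0.8, 0.0, 0.6],
    [-0.8, 0.0, 0.6],
    [0.0, 0.8, 0.6],
    [0.0, -0.8, 0.6],
])
nlos = np.array([False, False, False, False, True])  # flags from image processing

full_dop = gdop(sats)          # all satellites
los_dop = gdop(sats[~nlos])    # after excluding the NLOS satellite

# Exclude NLOS satellites only if the remaining geometry stays acceptable.
use_exclusion = los_dop < 6.0  # assumed DOP threshold
```

Dropping a satellite always worsens (increases) the DOP, which is exactly why the thesis conditions the exclusion on the DOP of the remaining LOS constellation.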
8

Vandenbroucke, Nicolas, and Jack-Gérard Postaire. "Segmentation d'images couleur par classification de pixels dans des espaces d'attributs colorimétriques adaptés : application à l'analyse d'images de football." [S.l.] : [s.n.], 2000. http://www.univ-lille1.fr/bustl-grisemine/pdf/extheses/50376-2000-404-405.pdf.

9

Vandenbroucke, Nicolas. "Segmentation d'images couleur par classification de pixels dans des espaces d'attributs colorimétriques adaptés : application à l'analyse d'images de football." Lille 1, 2000. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2000/50376-2000-404.pdf.

Abstract:
In the context of football image analysis, we propose an original methodology for segmenting color images into regions, which exploits the colorimetric properties of pixels to extract from the image the players to be tracked. The pixels of each image are assigned to different classes according to whether they represent the pitch, a player of one of the two teams, one of the two goalkeepers, or a referee, using classical methods for classifying multidimensional data based on supervised learning. The color of each pixel is usually represented by the three trichromatic components red, green and blue, but it can be coded in other representation systems, which we have grouped into families according to their properties. The originality of our approach consists in building a hybrid color space by selecting the color components best suited to the pixel classes to be recovered, possibly drawn from different systems. For this purpose, we use a discriminant analysis method combined with informational discrimination criteria. This approach is generalized by considering that a pixel is represented by colorimetric features evaluated over its neighborhood. It is thus possible to propose a list of features computed for each color component of the representation systems. The neighborhood in which these colorimetric features are computed defines a color texture and thus captures the connectivity relations between neighboring pixels. The most discriminant colorimetric features are grouped into a feature space adapted to the classification.
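The selection of discriminant colour components can be sketched with a simple per-component Fisher ratio; the component names, class samples, and two-component budget are invented, and the thesis's informational criteria are replaced by this simpler score.

```python
import numpy as np

# Six candidate colour components drawn from different colour systems (names
# are invented); two classes of pixel samples that differ only in R and S.
np.random.seed(1)
names = ["R", "G", "B", "H", "S", "L"]
class_a = np.random.normal([0.8, 0.5, 0.2, 0.1, 0.9, 0.6], 0.05, (50, 6))
class_b = np.random.normal([0.2, 0.5, 0.2, 0.1, 0.3, 0.6], 0.05, (50, 6))

def fisher_ratio(a, b):
    # Between-class separation over within-class spread, per component.
    between = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    within = a.var(axis=0) + b.var(axis=0)
    return between / within

scores = fisher_ratio(class_a, class_b)
# The hybrid space keeps the most discriminant components, whatever colour
# system each one came from.
hybrid = [names[i] for i in np.argsort(scores)[::-1][:2]]
```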
10

Souza, César Salgado Vieira de. "Classify-normalize-classify : a novel data-driven framework for classifying forest pixels in remote sensing images." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/158390.

Abstract:
Monitoring natural environments and their changes over time requires the analysis of a large amount of image data, often collected by orbital remote sensing platforms. However, variations in the observed signals due to changing atmospheric conditions often result in a data distribution shift for different dates and locations, making it difficult to discriminate between the various classes in a dataset built from several images. This work introduces a novel supervised classification framework, called Classify-Normalize-Classify (CNC), to alleviate this data shift issue. The proposed scheme uses a two-classifier approach. The first classifier is trained on non-normalized top-of-the-atmosphere reflectance samples to discriminate between pixels belonging to a class of interest (COI) and pixels from other categories (e.g. forest vs. non-forest). At test time, the estimated COI multivariate median signal, derived from the first classifier's segmentation, is subtracted from the image, thus anchoring the data distribution from different images to the same reference. Then, a second classifier, pre-trained to minimize the classification error on COI median-centered samples, is applied to the median-normalized test image to produce the final binary segmentation. The proposed methodology was tested for detecting deforestation using bitemporal Landsat 8 OLI images over the Amazon rainforest. Experiments using top-of-the-atmosphere multispectral reflectance images showed that the CNC framework mapped deforestation more accurately than a single classifier run on surface reflectance images provided by the United States Geological Survey (USGS). Accuracies from the proposed framework also compared favorably with the benchmark masks of the PRODES program.
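The median-normalization step at the heart of CNC can be sketched as follows; the synthetic image, the offset, and the threshold standing in for the first classifier are all invented.

```python
import numpy as np

# A synthetic 4-band image (100 pixels) with an atmospheric-like offset.
np.random.seed(2)
base = np.random.rand(100, 4)
shift = np.array([0.3, 0.1, -0.2, 0.05])
image = base + shift

# Stand-in for the first classifier: flag class-of-interest (COI) pixels.
coi_mask = image[:, 0] > np.median(image[:, 0])

# Normalize: subtract the COI multivariate median so images taken under
# different conditions share the same reference before the second classifier.
coi_median = np.median(image[coi_mask], axis=0)
normalized = image - coi_median
```

After this step the COI pixels of every image are centered at the origin band-wise, which is what lets the second classifier be trained once and reused across dates.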
11

Attia, Dhouha. "Segmentation d'images par combinaison adaptative couleur-texture et classification de pixels : applications à la caractérisation de l'environnement de réception de signaux GNSS." PhD thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-01001748.

12

Sadki, Mustapha. "Détection et segmentation d'objets d'intérêt en imagerie 2D et 3D par classification automatique des pixels et optimisation sous contraintes géométriques de contours déformables." Université Louis Pasteur (Strasbourg) (1971-2008), 1997. http://www.theses.fr/1997STR13270.

Abstract:
This thesis proposes a methodology and algorithms for the detection and segmentation of objects of interest by automatic pixel classification and by optimization of deformable contours under geometric constraints. They have been applied successfully to the detection of anomalies in mammographic imaging, to the automatic extraction of stenoses in 3D CT images, and to the extraction of fringes in inverse-moiré images for the acquisition of 3D shapes in the field of metrology. From the image-analysis point of view, the objective is to show that these computer vision problems can be solved without introducing advanced considerations or domain-specific knowledge, in mammography and CT as well as in 3D shape acquisition by moiré, by treating them as human visual perception problems that can be simulated by image processing and multidimensional data analysis algorithms. It is thanks to this independence from the application domain that the algorithms presented in this thesis have been used successfully in mammographic imaging, in 3D X-ray CT, and in the metrology of 3D shapes by analysis of inverse-moiré images.
13

Zbib, Hiba. "Segmentation d'images TEP dynamiques par classification spectrale automatique et déterministe." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR3317/document.

Abstract:
Quantification of dynamic PET images is a powerful tool for the in vivo study of tissue functionality. However, this quantification requires the definition of regions of interest for extracting the time-activity curves. These regions are usually identified manually by an expert operator, which makes them subjective. As a result, there is growing interest in the development of clustering methods that aim to separate the dynamic PET sequence into functional regions based on the temporal profiles of voxels. In this thesis, a spectral clustering method applied to the temporal profiles of voxels, which has the advantage of handling nonlinear clusters, is developed. The method is then extended to make it better suited to clinical applications. First, a global search procedure is used to locate, in a deterministic way, the optimal cluster centroids from the projected data. Second, an unsupervised clustering criterion is proposed and optimized by simulated annealing to automatically estimate the scale parameter and the weighting factors involved in the method. The proposed automatic and deterministic spectral clustering method is validated on simulated and real images and compared with two other segmentation methods from the literature. It improves the ROI definition and appears to be a promising pre-processing tool before ROI-based quantification and input-function estimation tasks.
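The core of spectral clustering on time-activity curves can be sketched with synthetic curves; the Gaussian affinity, the median-based scale parameter, and the two-region toy data are assumptions, and the thesis's simulated-annealing parameter estimation is omitted.

```python
import numpy as np

# Two synthetic families of time-activity curves (rising vs falling uptake).
np.random.seed(3)
t = np.linspace(0.0, 1.0, 20)
curves = np.vstack([
    (1 - np.exp(-5 * t)) + 0.02 * np.random.randn(10, 20),
    np.exp(-5 * t) + 0.02 * np.random.randn(10, 20),
])

# Gaussian affinity over curve distances; the median distance serves as a
# simple stand-in for the automatically estimated scale parameter.
d = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)
sigma = np.median(d)
W = np.exp(-((d / sigma) ** 2))

# Normalized graph Laplacian I - D^{-1/2} W D^{-1/2} and its second
# eigenvector (Fiedler vector); its sign splits the voxels into two clusters.
deg = W.sum(axis=1)
L = np.eye(len(W)) - W / np.sqrt(np.outer(deg, deg))
_, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

This is why the approach handles nonlinear clusters: the split happens in the eigenvector embedding of the affinity graph, not in the raw curve space.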
APA, Harvard, Vancouver, ISO, and other styles
14

Gillet, Aymeric. "Détection des modes par opérateurs morphologiques flous pour la segmentation d'images couleurs." Lille 1, 2004. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2004/50376-2004-Gillet.pdf.

Full text
Abstract:
This document presents an original colour image segmentation method using fuzzy mathematical morphology applied to the 3D colour histogram. Segmentation consists in detecting the different modes present in the 3D colour histogram in order to build the classes of pixels that characterise the homogeneous regions of the image. To this end, we show how each point of the 3D colour histogram, associated with one or more pixels of the colour image, can belong more or less strongly to a mode. We then define the fuzzy subset "mode", characterised by its membership function, which evaluates the degree to which the point under consideration belongs to a mode. This function is evaluated through a concavity analysis of the 3D colour histogram. An original fuzzy morphological transformation is then applied to this mode membership function in order to detect the points of the colour histogram belonging to a mode. The effectiveness of our approach is finally illustrated on various colour images.
APA, Harvard, Vancouver, ISO, and other styles
15

Nyman, Joakim. "Pixel classification of hyperspectral images." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353175.

Full text
Abstract:
For sugar producers, detecting contamination of sugar is a major problem. Doing it manually would not be feasible because of the high demand and would require too much labor. This report evaluates whether the problem can be solved by using a hyperspectral camera operating in a wavelength range of 400-1000 nm with a spectral resolution of 224 bands. Using the machine learning algorithms Artificial Neural Network and Support Vector Machine, models were trained on pixels labeled as sugar or as different materials of contamination. An autonomous system could then be developed to analyze the sugar in real time and remove the contaminated sugar. This paper presents the results from using both Artificial Neural Networks and Support Vector Machines. It also addresses the impact of the pre-processing techniques of filtering and maximum normalization when applying machine learning algorithms. The results showed that the accuracy can be significantly increased by using a hyperspectral camera instead of a normal camera, especially for plastic materials, where using a normal camera gave precision and recall scores of 0.0 while the hyperspectral camera gave above 0.9. Support Vector Machine performed slightly better than Artificial Neural Network, especially for plastic material. The filtering and the maximum normalization did not increase the accuracy and could therefore be omitted in favor of performance.
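As a rough illustration of the kind of pipeline this abstract describes, the sketch below trains an SVM on synthetic 224-band spectra after the maximum normalization mentioned above. The spectra, class setup and absorption feature are invented for illustration, not the thesis's actual dataset.

```python
# Minimal sketch of per-pixel SVM classification of hyperspectral data,
# with per-spectrum maximum normalization as a pre-processing step.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_bands = 224  # spectral resolution cited in the abstract

# Fake spectra: "sugar" pixels bright across all bands, "plastic" pixels
# with a distinctive absorption dip; labels 0 = sugar, 1 = contaminant.
sugar = 0.8 + 0.05 * rng.standard_normal((100, n_bands))
plastic = 0.8 + 0.05 * rng.standard_normal((100, n_bands))
plastic[:, 100:120] -= 0.4  # invented absorption feature

X = np.vstack([sugar, plastic])
y = np.array([0] * 100 + [1] * 100)

# Maximum normalization: scale each spectrum by its own peak value,
# removing overall brightness differences between pixels.
X_norm = X / X.max(axis=1, keepdims=True)

clf = SVC(kernel="rbf").fit(X_norm, y)
print(clf.score(X_norm, y))
```

In practice one would evaluate on a held-out set rather than the training pixels; the point here is only the shape of the per-pixel classification workflow.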
APA, Harvard, Vancouver, ISO, and other styles
16

Newell, John T. "Pixel classification by morphological granulometric features /." Online version of thesis, 1991. http://hdl.handle.net/1850/11210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Bosdogianni, Panagiota. "Mixed pixel classification in remote sensing." Thesis, University of Surrey, 1996. http://epubs.surrey.ac.uk/843999/.

Full text
Abstract:
This thesis is concerned with the problem of mixed pixel classification in remote sensing applications and attempts to find accurate and robust solutions to this problem. The application we are interested in is monitoring burned forest regions for a few years after the fire in order to identify the type of vegetation present in these areas and consequently assess the danger of desertification. The areas of interest are semi-arid, where the vegetation tends to vary at smaller scales than the area covered by a single Landsat TM pixel; thus mixed pixels are quite common. In this thesis we consider whole sets of mixed pixels. First, an overview of the methods currently used to solve the mixed pixel classification problem is presented, focusing on the linear mixing model, which is adopted in this thesis. Then a method that incorporates higher order moments of the distributions of the pure and the mixed classes is proposed. This method is shown to augment the number of equations used for the classification, and theoretically it allows the specification of more cover classes than there are bands available, without compromising the accuracy of the results. The problem of deterioration of the classification performance, due to inaccuracies in the calculation of the statistics when outliers are present, is also examined. The use of the Hough Transform is proposed for the linear unmixing in order to provide robust estimates even in cases where outliers are present. The Hough Transform method, though, is exhaustive and therefore has higher computational complexity. Furthermore, its performance in the absence of outliers is not as good as the solution obtained by the Least Squares Error method. Hence, the Randomized Hough Transform is proposed in order to improve the computational speed while maintaining the same level of performance, and the Hypothesis Testing Hough Transform is proposed to improve the accuracy of the classification results.
All the methods proposed in this thesis have been compared with the Least Squares Error method using simulated and real Landsat TM image data, in order to illustrate the validity and usefulness of the proposed algorithms.
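The linear mixing model this thesis builds on can be illustrated with a minimal least-squares unmixing of a single synthetic pixel. The endmember signatures below are invented for illustration, and the robust Hough-transform variants the thesis proposes are not reproduced.

```python
# Sketch of linear spectral unmixing by least squares: each mixed pixel is
# modelled as a proportion-weighted sum of pure-class ("endmember")
# signatures, and the proportions are recovered from the band values.
import numpy as np

# 6 spectral bands (e.g. the reflective Landsat TM bands), 3 cover classes.
endmembers = np.array([  # columns: vegetation, soil, burned (invented)
    [0.05, 0.30, 0.08],
    [0.08, 0.35, 0.09],
    [0.06, 0.40, 0.10],
    [0.50, 0.45, 0.12],
    [0.30, 0.55, 0.15],
    [0.15, 0.60, 0.18],
])

true_props = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_props  # noiseless mixed pixel

# Ordinary least-squares estimate of the proportions; with noise or
# outliers present, robust estimators (such as the thesis's Hough-transform
# variants) are preferable to this plain solution.
est, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(np.round(est, 3))
```

On noiseless data with linearly independent endmembers, the least-squares solution recovers the true proportions exactly; the interesting cases in the thesis are precisely those where it does not.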
APA, Harvard, Vancouver, ISO, and other styles
18

Bengali, Umme Salma Yusuf. "Pixel classification of iris transillumination defects." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Jiang, Shiguo. "Estimating Per-pixel Classification Confidence of Remote Sensing Images." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354557859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Stewart, Seth Andrew. "Fully Convolutional Neural Networks for Pixel Classification in Historical Document Images." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7064.

Full text
Abstract:
We use a Fully Convolutional Neural Network (FCNN) to classify pixels in historical document images, enabling the extraction of high-quality, pixel-precise and semantically consistent layers of masked content. We also analyze a dataset of hand-labeled historical form images of unprecedented detail and complexity. The semantic categories we consider in this new dataset include handwriting, machine-printed text, dotted and solid lines, and stamps. Segmentation of document images into distinct layers allows handwriting, machine print, and other content to be processed and recognized discriminatively, and therefore more intelligently than might be possible with content-unaware methods. We show that an efficient FCNN with relatively few parameters can accurately segment documents having similar textural content when trained on a single representative pixel-labeled document image, even when layouts differ significantly. In contrast to the overwhelming majority of existing semantic segmentation approaches, we allow multiple labels to be predicted per pixel location, which allows for direct prediction and reconstruction of overlapped content. We perform an analysis of prevalent pixel-wise performance measures, and show that several popular performance measures can be manipulated adversarially, yielding arbitrarily high measures based on the type of bias used to generate the ground-truth. We propose a solution to the gaming problem by comparing absolute performance to an estimated human level of performance. We also present results on a recent international competition requiring the automatic annotation of billions of pixels, in which our method took first place.
APA, Harvard, Vancouver, ISO, and other styles
21

Ali, Fadi. "Urban classification by pixel and object-based approaches for very high resolution imagery." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-23993.

Full text
Abstract:
Recently, a tremendous amount of high resolution imagery has become available that was not obtainable years ago, mainly because of advances in the technology for capturing such images. Most very high resolution (VHR) imagery comes in only three bands, red, green and blue (RGB), and the importance of using such imagery in remote sensing studies has only been considered recently; despite this, there are not enough studies examining the usefulness of this imagery in urban applications. This research proposes a method to investigate high resolution imagery for analysing an urban area, using UAV imagery for land use and land cover classification. Remote sensing imagery comes in various characteristics and formats from different sources, most commonly from satellite and airborne platforms. Recently, unmanned aerial vehicles (UAVs) have become a very promising source of geographic data with new unique properties, the most important asset being the very high spatial and temporal resolution of the data. UAV systems are a promising technology that will advance not only remote sensing but GIScience as well. UAV imagery has been gaining popularity over the last decade for various remote sensing and GIS applications in general, and particularly for image analysis and classification. One concern with UAV imagery is finding an optimal classification approach, which is usually hard to define because many variables are involved in the process, such as the properties of the image source and the purpose of the classification. The main objective of this research is to evaluate land use / land cover (LULC) classification for urban areas, where the data for the study area consist of VHR imagery in RGB bands collected by a basic, off-the-shelf and simple UAV. LULC classification was conducted by pixel-based and object-based approaches, and supervised algorithms were used in both approaches to classify the image.
In the pixel-based image analysis, three different algorithms were used to create a final classified map, while one algorithm was used in the object-based image analysis. The study also tested the effectiveness of the object-based approach over the pixel-based one in order to minimise the difficulty of classifying mixed pixels in VHR imagery, while identifying all possible classes in the scene and maintaining high accuracy. Both approaches were applied to a UAV image with three spectral bands (red, green and blue), in addition to a DEM layer that was later added to the image as ancillary data. Previous studies comparing pixel-based and object-based classification approaches claim that the object-based approach produces better class results for VHR imagery. Meanwhile, several trade-offs are made when selecting a classification approach, varying with perspective and with factors such as time cost, trial and error, and subjectivity. Classification based on pixels was approached in this study through supervised learning algorithms, where the classification process included all necessary steps such as selecting representative training samples and creating a spectral signature file. The object-based classification process included segmenting the UAV imagery and creating class rules using feature extraction. In addition, the incorporation of hue, saturation and intensity (IHS) colour domain and Principal Component Analysis (PCA) layers was tested to evaluate the ability of such a method to produce better class results for simple UAV imagery. These UAVs are usually equipped with only RGB colour sensors, and combining additional derived colour bands such as IHS has proven useful in prior studies for object-based image analysis (OBIA) of UAV imagery; however, incorporating the IHS domain and PCA layers in this research did not provide much better classes.
For the pixel-based classification approach, it was found that the Maximum Likelihood algorithm performs better for VHR UAV imagery than the other two algorithms, Minimum Distance and Mahalanobis Distance. The difference in overall accuracy between the algorithms in the pixel-based approach was clear: the values for Maximum Likelihood, Minimum Distance and Mahalanobis Distance were 86%, 80% and 76% respectively. The Average Precision (AP) measure was calculated to compare the pixel-based and object-based approaches; the result was higher for the object-based approach when applied to the buildings class, with an AP of 0.9621 for object-based classification and 0.9152 for pixel-based classification. The results revealed that pixel-based classification is still effective and applicable to UAV imagery; however, the object-based classification performed with the Nearest Neighbour algorithm produced more appealing classes with higher accuracy. It was also concluded that OBIA has more power for extracting geographic information and integrates more easily within GIS, and the results of this research are expected to be applicable to classifying UAV imagery for LULC applications.
APA, Harvard, Vancouver, ISO, and other styles
22

Fischer, Manfred M., and Petra Staufer-Steinnocher. "Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem." WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/4150/1/WSG_DP_6298.pdf.

Full text
Abstract:
Various techniques of optimizing the multiple class cross-entropy error function to train single hidden layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of backpropagation of gradient descent, PR-conjugate gradient and BFGS quasi-Newton errors. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size, to avoid unacceptable instabilities in the generalization results. (authors' abstract)<br>Series: Discussion Papers of the Institute for Economic Geography and GIScience
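A toy version of the batch gradient descent setting compared in this paper can be sketched as follows. For brevity this uses a single linear softmax layer rather than the paper's single-hidden-layer network, and the data are synthetic.

```python
# Toy illustration of minimizing the multiple-class cross-entropy error of
# a softmax classifier by batch gradient descent (a simplified stand-in
# for the training schemes compared in the paper).
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((150, 4))            # 150 "pixels", 4 spectral bands
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # two synthetic land-cover classes
T = np.eye(2)[y]                             # one-hot targets

W = np.zeros((4, 2))
for _ in range(500):
    logits = X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)        # softmax outputs
    # Gradient of the mean cross-entropy error w.r.t. W is X^T (P - T) / n;
    # the whole batch is used per update (batch mode).
    W -= 0.5 * X.T @ (P - T) / len(X)

accuracy = (P.argmax(axis=1) == y).mean()
print(accuracy)
```

Epoch-based and stochastic variants would apply the same update over subsets of the rows of `X` instead of the full batch, which is exactly the axis of comparison in the paper.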
APA, Harvard, Vancouver, ISO, and other styles
23

Fischer, Manfred M., Sucharita Gopal, Petra Staufer-Steinnocher, and Klaus Steinocher. "Evaluation of Neural Pattern Classifiers for a Remote Sensing Application." WU Vienna University of Economics and Business, 1995. http://epub.wu.ac.at/4184/1/WSG_DP_4695.pdf.

Full text
Abstract:
This paper evaluates the classification accuracy of three neural network classifiers on a satellite image-based pattern classification problem. The neural network classifiers used include two types of the Multi-Layer-Perceptron (MLP) and the Radial Basis Function Network. A conventional classifier is used as a benchmark to evaluate the performance of the neural network classifiers. The satellite image consists of 2,460 pixels selected from a section (270 x 360) of a Landsat-5 TM scene of the city of Vienna and its northern surroundings. In addition to the evaluation of classification accuracy, the neural classifiers are analysed for generalization capability and stability of results. Best overall results (in terms of accuracy and convergence time) are provided by the MLP-1 classifier with weight elimination. It has a small number of parameters and requires no problem-specific system of initial weight values. Its in-sample classification error is 7.87% and its out-of-sample classification error is 10.24% for the problem at hand. Four classes of simulations serve to illustrate the properties of the classifier in general and the stability of the results with respect to control parameters (training time, the gradient descent control term, initial parameter conditions) and to different training and testing sets. (authors' abstract)<br>Series: Discussion Papers of the Institute for Economic Geography and GIScience
APA, Harvard, Vancouver, ISO, and other styles
24

Staufer-Steinnocher, Petra, and Manfred M. Fischer. "A Neural Network Classifier for Spectral Pattern Recognition. On-Line versus Off-Line Backpropagation Training." WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/4152/1/WSG_DP_6097.pdf.

Full text
Abstract:
In this contribution we evaluate on-line and off-line techniques to train a single hidden layer neural network classifier with logistic hidden and softmax output transfer functions on a multispectral pixel-by-pixel classification problem. In contrast to current practice, a multiple class cross-entropy error function has been chosen as the function to be minimized. The non-linear differential equations cannot be solved in closed form. To solve for a set of locally minimizing parameters we use the gradient descent technique for parameter updating, based upon the backpropagation technique for evaluating the partial derivatives of the error function with respect to the parameter weights. Empirical evidence shows that on-line and epoch-based gradient descent backpropagation fail to converge within 100,000 iterations, due to the fixed step size. Batch gradient descent backpropagation training is superior in terms of learning speed and convergence behaviour. Stochastic epoch-based training tends to be slightly more effective than on-line and batch training in terms of generalization performance, especially when the number of training examples is larger. Moreover, it is less prone to fall into local minima than on-line and batch modes of operation. (authors' abstract)<br>Series: Discussion Papers of the Institute for Economic Geography and GIScience
APA, Harvard, Vancouver, ISO, and other styles
25

Guettari, Nadjib. "Évaluation du contenu d'une image couleur par mesure basée pixel et classification par la théorie des fonctions de croyance." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2275/document.

Full text
Abstract:
Nowadays it has become increasingly simple for anyone to take pictures with digital cameras, to download these images to a computer and to use different image processing software to apply modifications to these images (compression, denoising, transmission, etc.). However, these treatments lead to degradations which affect the visual quality of the image. In addition, with the widespread use of the Internet and the growth of electronic mail, sophisticated image-editing software has become widely available, allowing images to be falsified for legitimate or malicious purposes in confidential or secret communications. In this context, steganography is a method of choice for embedding and transmitting information.
In this manuscript we address two issues: image quality assessment and the detection of a modification or of hidden information in an image. The first objective is to develop a no-reference measure allowing the quality of an image to be evaluated automatically, in correlation with human visual appreciation. We then propose a steganalysis scheme to detect, with the best possible reliability, the presence of information embedded in natural images. In this thesis, the challenge is to take into account the imperfection of the manipulated data coming from different sources of information with different degrees of precision. In this context, in order to take full advantage of all this information, we propose to use the theory of belief functions. This theory makes it possible to represent knowledge in a relatively natural way in the form of a belief structure.
We proposed a no-reference image quality assessment measure, called wms-EVreg2, which is able to estimate the quality of images degraded by multiple types of distortion. This approach is based on the fusion of different statistical features extracted from the image, depending on the reliability of each set of features as estimated through the confusion matrix. From the various experiments, we found that wms-EVreg2 correlates well with subjective quality scores and provides competitive quality-prediction performance compared to full-reference image quality measures.
For the second problem addressed, we proposed a steganalysis scheme based on the theory of belief functions constructed on random subspaces of the features. The performance of the proposed method was evaluated on different steganography algorithms in the JPEG transform domain as well as in the spatial domain. These experimental tests have shown the effectiveness of the proposed method in certain application frameworks. However, many configurations remain undetectable.
APA, Harvard, Vancouver, ISO, and other styles
26

Porter, Sarah Ann. "Land cover study in Iowa: analysis of classification methodology and its impact on scale, accuracy, and landscape metrics." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/1169.

Full text
Abstract:
For landscapes dominated by agriculture, land cover plays an important role in the balance between anthropogenic and natural forces. Therefore, the objective of this thesis is to describe two different methodologies that have been implemented to create high-resolution land cover classifications in a dominant agricultural landscape. First, an object-based segmentation approach will be presented, which was applied to historic, high resolution, panchromatic aerial photography. Second, a traditional per-pixel technique was applied to multi-temporal, multispectral, high resolution aerial photography, in combination with light detection and ranging (LIDAR) and independent component analysis (ICA). A critical analysis of each approach will be discussed in detail, as well as the ability of each methodology to generate landscape metrics that can accurately characterize the quality of the landscape. This will be done through the comparison of various landscape metrics derived from the different classifications approaches, with a goal of enhancing the literature concerning how these metrics vary across methodologies and across scales. This is a familiar problem encountered when analyzing land cover datasets over time, which are often at different scales or generated using different methodologies. The diversity of remotely sensed imagery, including varying spatial resolutions, landscapes, and extents, as well as the wide range of spatial metrics that can be created, has generated concern about the integrity of these metrics when used to make inferences about landscape quality. Finally, inferences will be made about land cover and land cover change dynamics for the state of Iowa based on insight gained throughout the process.
APA, Harvard, Vancouver, ISO, and other styles
27

Chrétien, Louis-Philippe. "Détection et dénombrement de la moyenne et grande faune par imagerie visible et infrarouge thermique acquise à l'aide d'un aéronef sans pilote (ASP)." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8598.

Full text
Abstract:
Aerial surveys are a practical approach for inventorying large wildlife over extensive territories, particularly in areas with limited access. However, limitations related to observers' detection capabilities, the cryptic coloration of some wildlife species and the structural complexity of certain habitats mean that surveys generally carry biases that underestimate the true population density. Moreover, few studies have demonstrated the ability to perform simultaneous aerial detection of several species. Multispecies detection can be useful for species that share the same space, in order to understand their use of space, to study predator/prey relationships and to limit costs to a single survey. This practice is nevertheless too demanding for observers, who already need a great deal of concentration to detect a single species during a traditional aerial survey. The use of multispectral aerial imagery acquired with an unmanned aerial vehicle (UAV) is a potential method for detecting one or more wildlife species. This research project therefore consisted, first, in detecting, identifying and counting white-tailed deer (Odocoileus virginianus) by image processing of imagery acquired with a UAV. Different combinations of spectral bands, image analysis methods and spatial resolutions were tested to determine the most effective method for deer detection. Second, the best method identified for deer was used and adapted to perform simultaneous detection of American bison (Bison bison), European fallow deer (Dama dama), gray wolves (Canis lupus) and elk (Cervus canadensis).
The wildlife survey was carried out at the Centre d'observation de la faune et d'interprétation de l'agriculture de Falardeau in Saint-David-de-Falardeau, Quebec, Canada. The results show that visible and thermal infrared imagery with a spatial resolution of 0.8 cm/pixel, combined with object-based image analysis, is the most effective combination among those tested for detecting white-tailed deer. All individuals visible to the naked eye on the mosaics were detected. Nevertheless, considering the visual obstruction caused by the coniferous canopy, this approach offers an average detectability rate of 0.5, comparable to conventional aerial surveys. The structural complexity of the habitat thus remains an unresolved problem. As for the multispecies analysis, all bison and elk were detected, even in the presence of other species such as ostrich (Struthio camelus), coyote (Canis latrans) and black bear (Ursus americanus). For fallow deer and wolves, between 0 and 1 individual per plot was confused with other landscape elements such as bare ground, and between 0 and 2 individuals per plot were not detected although they were present in the flight line. Not only has this approach demonstrated its ability to detect one or more species, it has also shown its adaptability in specifically targeting the species of interest to the manager and ignoring those that are not targeted. This project has thus validated the potential of UAVs for acquiring imagery of a quality allowing the extraction of survey data. It opens the way to the use of this type of acquisition platform for wildlife management applications, thanks to their low noise impact and high revisit rate. However, current Canadian regulations limit the use of these devices to small areas.
Nevertheless, the technology can be developed while awaiting future progress in the UAV field and in regulation.
APA, Harvard, Vancouver, ISO, and other styles
28

Meléndez, Rodríguez Jaime Christian. "Supervised and unsupervised segmentation of textured images by efficient multi-level pattern classification." Doctoral thesis, Universitat Rovira i Virgili, 2010. http://hdl.handle.net/10803/8487.

Full text
Abstract:
This thesis proposes new, efficient methodologies for supervised and unsupervised image segmentation based on texture information. For the supervised case, a technique for pixel classification based on a multi-level strategy that iteratively refines the resulting segmentation is proposed. This strategy utilizes pattern recognition methods based on prototypes (determined by clustering algorithms) and support vector machines. In order to obtain the best performance, an algorithm for automatic parameter selection and methods to reduce the computational cost associated with the segmentation process are also included. For the unsupervised case, the previous methodology is adapted by means of an initial pattern discovery stage, which allows transforming the original unsupervised problem into a supervised one. Several sets of experiments considering a wide variety of images are carried out in order to validate the developed techniques.
APA, Harvard, Vancouver, ISO, and other styles
29

Fabre, Sophie. "Apport de l'information contextuelle à la fusion multicapteurs : application à la fusion pixel." Toulouse, ENSAE, 1999. http://www.theses.fr/1999ESAE0013.

Full text
Abstract:
Two general theoretical classification methods are developed that use the fusion of multispectral images and integrate information on sensor validity. These general methods are applied to perform classification at the pixel level and are then implemented on physical data. Dempster-Shafer theory, given its inherent advantages, is used to integrate additional information on the conditions under which the sensors acquire their measurements. This information, which indicates the reliability to be attributed to the sensors, is called contextual information. The formalism for modelling contextual information is based on the evaluation of fuzzy validity domains, either of each source taken in isolation, or of all the associations of sources considered competitively. Two methods for fusion and for modelling contextual information are thus proposed. When prior training is not available for an object in the scene, these methods are extended and a global method is derived. The methods are applied to typical cases in order to compare the resulting classification with that of a classical probabilistic approach that does not take sensor validity into account. In addition, the methods are applied to perform pixel-level classification on physical data. A single atmospheric perturbing parameter, water vapour, is introduced to assess the reliability of the measurements acquired by the sensors. The data used are obtained from databases (meteorological data, spectral reflectance measurements). These simulations of typical cases and this application to pixel classification show that the contextual information is relevant and correctly integrated, and that classification performance is improved.
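The Dempster-Shafer evidence combination underlying this approach can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the two mass functions below, defined over a hypothetical {vegetation, soil} frame, stand in for two sensors whose reliability differs, with the less reliable sensor leaving part of its mass on the whole frame (ignorance).

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule, normalizing out the conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Illustrative frame of discernment {veg, soil}; the second sensor keeps
# some mass on the whole frame, modelling reduced reliability.
VEG, FRAME = frozenset({"veg"}), frozenset({"veg", "soil"})
m1 = {VEG: 0.8, FRAME: 0.2}
m2 = {VEG: 0.6, FRAME: 0.4}
fused = dempster_combine(m1, m2)
```

Contextual information of the kind the thesis describes would enter by discounting a sensor's masses toward the whole frame when its fuzzy validity domain indicates low reliability.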
APA, Harvard, Vancouver, ISO, and other styles
30

Joseph, Katherine Amanda. "Comparison of Segment and Pixel Based Non-Parametric Classification of Land Cover in the Amazon Region of Brazil Using Multitemporal Landsat TM/ETM+ Imagery." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32802.

Full text
Abstract:
This study evaluated the ability of segment-based classification paired with non-parametric methods (CART and kNN) to classify a chronosequence of Landsat TM/ETM+ imagery spanning from 1992 to 2002 within the state of Rondônia, Brazil. Pixel-based classification was also implemented for comparison. Interannual multitemporal composites were used in each classification in an attempt to increase the separation of primary forest, cleared, and re-vegetated classes within a given year. The kNN and CART classification methods, with the integration of multitemporal data, performed equally well with overall accuracies ranging from 77% to 91%. Pixel-based CART classification, although not different in terms of mean or median overall accuracy, did have significantly lower variability than all other techniques (3.2% vs. an average of 13.2%), and thus provided more consistent results. Segmentation did not improve classification success over pixel-based methods and was therefore an unnecessary processing step with the used dataset. Through the appropriate band selection methods of the respective non-parametric classifiers, multitemporal bands were chosen in 38 of the 44 total classifications, strongly suggesting the utility of interannual multitemporal data for the separation of cleared, re-vegetated, and primary forest classes. The separation of the primary forest class from the cleared and re-vegetated classes was particularly successful and may be a possible result of the incorporation of multitemporal data. The land cover maps from this study allow for an accurate annualized analysis of land cover and can be coupled with household data to gain a better understanding of landscape change in the region.<br>Master of Science
APA, Harvard, Vancouver, ISO, and other styles
31

Sajid, Hasan. "A Universal Background Subtraction System." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/47.

Full text
Abstract:
Background Subtraction is one of the fundamental pre-processing steps in video processing. It helps to distinguish between foreground and background for any given image and thus has numerous applications including security, privacy, surveillance and traffic monitoring to name a few. Unfortunately, no single algorithm exists that can handle various challenges associated with background subtraction such as illumination changes, dynamic background, camera jitter etc. In this work, we propose a Multiple Background Model based Background Subtraction (MB2S) system, which is universal in nature and is robust against real life challenges associated with background subtraction. It creates multiple background models of the scene followed by both pixel and frame based binary classification on both RGB and YCbCr color spaces. The masks generated after processing these input images are then combined in a framework to classify background and foreground pixels. Comprehensive evaluation of proposed approach on publicly available test sequences show superiority of our system over other state-of-the-art algorithms.
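For contrast with the multiple-background-model MB2S system described above, a single-model baseline helps show what background subtraction reduces to at its simplest. The sketch below, with illustrative parameter values, maintains one exponentially weighted running-average background and thresholds the per-pixel difference:

```python
import numpy as np

def foreground_masks(frames, alpha=0.05, thresh=25):
    """Boolean foreground mask per frame from a running-average
    background model (a simple single-model baseline)."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        masks.append(diff > thresh)  # pixels far from the model are foreground
        # slowly absorb the current frame into the background model
        background = (1 - alpha) * background + alpha * frame
    return masks

# A static dark scene in which one bright object appears in the second frame.
scene = np.zeros((4, 4), dtype=np.uint8)
moving = scene.copy()
moving[1, 1] = 200
masks = foreground_masks([scene, moving])
```

A single model like this fails under exactly the challenges the abstract lists (illumination changes, dynamic background, camera jitter), which is the motivation for maintaining several models and fusing pixel- and frame-level decisions.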
APA, Harvard, Vancouver, ISO, and other styles
32

Grift, Jeroen. "Forest Change Mapping in Southwestern Madagascar using Landsat-5 TM Imagery, 1990 –2010." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-22606.

Full text
Abstract:
The main goal of this study was to map and measure forest change in the southwestern part of Madagascar near the city of Toliara in the period 1990–2010. Recent studies show that forest change in Madagascar on a regional scale involves not only forest loss but also forest growth. However, it is unclear which of these patterns the study area exhibits. In order to select the right classification method, pixel-based classification was compared with object-based classification. The results of this study show that the object-based classification method was the most suitable for this landscape, although the pixel-based approaches also produced accurate results. Furthermore, the study shows that in the period 1990–2010, 42% of the forest cover disappeared and was converted into bare soil and savannahs. In addition to this change in forest cover, the remaining stable forest regions became fragmented. This has negative effects on the amount of suitable habitat for Malagasy fauna. Finally, the scaling structure of landscape patches was investigated. The study shows that the patch size distribution has long-tail properties and that these properties do not change in periods of deforestation.
APA, Harvard, Vancouver, ISO, and other styles
33

Zahradnik, Roman. "Texturní příznaky." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-236898.

Full text
Abstract:
The aim of this project is to evaluate the effectiveness of various texture features within the context of image processing, particularly for the task of texture recognition and classification. My work focuses on comparing and discussing the usage and efficiency of texture features based on local binary patterns and co-occurrence matrices. As for the classification algorithm, cluster analysis was chosen.
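The local binary pattern feature discussed in this abstract reduces, per pixel, to thresholding the eight neighbours against the centre and packing the results into a byte. A minimal sketch (the clockwise-from-top-left neighbour ordering is one common convention, chosen here for illustration):

```python
import numpy as np

def lbp_code(patch):
    """8-bit local binary pattern of the centre pixel of a 3x3 patch:
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= center)

patch = np.array([[9, 9, 9],
                  [0, 5, 0],
                  [0, 0, 0]])
code = lbp_code(patch)  # only the three top neighbours pass the threshold
```

The texture descriptor actually fed to a classifier (or, as here, to cluster analysis) is then the histogram of these codes over an image region, not a single code.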
APA, Harvard, Vancouver, ISO, and other styles
34

Yokum, Hannah Elizabeth. "Understanding Community and Ecophysiology of Plant Species on the Colorado Plateau." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/7211.

Full text
Abstract:
The intensification of aridity due to anthropogenic climate change is likely to have a large impact on the growth and survival of plant species in the southwestern U.S. where species are already vulnerable to high temperatures and limited precipitation. Global climate change impacts plants through a rising temperature effect, CO2 effect, and land management. In order to forecast the impacts of global climate change, it is necessary to know the current conditions and create a baseline for future comparisons and to understand the factors and players that will affect what happens in the future. The objective of Chapter 1 is to create the very first high resolution, accurate, park-wide map that shows the distribution of dominant plants on the Colorado Plateau and serves as a baseline for future comparisons of species distribution. If we are going to forecast what species have already been impacted by global change or will likely be impacted in the future, we need to know their physiology. Chapter 2 surveys the physiology of the twelve most abundant non-tree species on the Colorado Plateau to help us forecast what climate change might do and to understand what has likely already occurred. Chapter 1. Our objective was to create an accurate species-level classification map using a combination of multispectral data from the World View-3 satellite and hyperspectral data from a handheld radiometer to compare pixel-based and object-based classification. We found that overall, both methods were successful in creating an accurate landscape map. Different functional types could be classified with fairly good accuracy in a pixel-based classification but to get more accurate species-level classification, object-based methods were more effective (0.915, kappa coefficient=0.905) than pixel-based classification (0.79, kappa coefficient=0.766). 
Although spectral reflectance values were important in classification, the addition of other features such as brightness, texture, number of pixels, size, shape, compactness, and asymmetry improved classification accuracy. Chapter 2. We sought to understand whether patterns of gas exchange in response to changes in temperature and CO2 can explain why C3 shrubs are increasing, and C3 and C4 grasses are decreasing, in the southwestern U.S. We conducted seasonal, leaf-level gas exchange surveys, and measured temperature response curves and A-Ci response curves of common shrub, forb, and grass species in perennial grassland ecosystems over the year. We found that the functional trait of being evergreen is increasingly successful under changing climate conditions with warmer winter months. Grass species in our study did not differentiate by photosynthetic pathway; they were physiologically the same in all of our measurements. The increasing shrub species Ephedra viridis and Coleogyne ramosissima displayed functional similarities in response to increasing temperature and CO2.
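The kappa coefficients quoted alongside the overall accuracies above correct agreement for chance. From a confusion matrix, Cohen's kappa can be computed as follows (a sketch with an illustrative two-class matrix, not data from the study):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    observed = np.trace(c) / total                      # overall accuracy
    # chance agreement from the row and column marginals
    expected = (c.sum(axis=0) @ c.sum(axis=1)) / total ** 2
    return (observed - expected) / (1.0 - expected)

# Illustrative two-class matrix: 80% overall accuracy, balanced classes.
kappa = cohens_kappa([[40, 10],
                      [10, 40]])
```

This is why a kappa of 0.905 can sit slightly below an overall accuracy of 0.915: the chance-agreement term is subtracted before normalizing.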
APA, Harvard, Vancouver, ISO, and other styles
35

El-Ossta, Esam E. A. "Automated dust storm detection using satellite images. Development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5760.

Full text
Abstract:
Dust storms are natural hazards whose frequency has increased in recent years over the Sahara Desert, Australia, the Arabian Desert, Turkmenistan and northern China, and which have worsened during the last decade. Dust storms increase air pollution, impact urban areas and farms, and affect ground and air traffic. They damage human health, reduce the temperature, damage communication facilities, and reduce visibility, which delays both road and air traffic and impacts both urban and rural areas. Thus, it is important to know the causation, movement and radiation effects of dust storms. The monitoring and forecasting of dust storms is increasing in order to help governments reduce their negative impact. Satellite remote sensing is the most common method, but its use over sandy ground is still limited, since dust and sand share similar characteristics. Moreover, satellite remote sensing using true-colour images or estimates of aerosol optical thickness (AOT) and algorithms such as the deep blue algorithm has limitations for identifying dust storms. Many researchers have studied the detection of dust storms during daytime in a number of different regions of the world, including China, Australia, America, and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of the present study are to use data from the Moderate Resolution Imaging Spectroradiometers on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database.<br>Libyan Centre for Remote Sensing and Space Science<br>Appendix A was submitted with extra data files which are not available online.
APA, Harvard, Vancouver, ISO, and other styles
36

Gürtler, Johannes, Felix Greiffenhagen, Jakob Woisetschläger, Daniel Haufe, and Jürgen Czarske. "Non-invasive seedingless measurements of the flame transfer function using high-speed camerabased laser vibrometry." SPIE, 2017. https://tud.qucosa.de/id/qucosa%3A34893.

Full text
Abstract:
The characterization of modern jet engines or stationary gas turbines running with lean combustion by means of swirl-stabilized flames necessitates seedingless optical field measurements of the flame transfer function, i.e. the ratio of the fluctuating heat release rate inside the flame volume, the instationary flow velocity at the combustor outlet and the time average of both quantities. For this reason, a high-speed camera-based laser interferometric vibrometer is proposed for spatio-temporally resolved measurements of the flame transfer function inside a swirl-stabilized technically premixed flame. Each pixel provides line-of-sight measurements of the heat release rate due to the linear coupling to fluctuations of the refractive index along the laser beam, which are based on density fluctuations inside the flame volume. Additionally, field measurements of the instationary flow velocity are possible due to correlation of simultaneously measured pixel signals and the known distance between the measurement positions. Thus, the new system enables the spatially resolved detection of the flame transfer function and instationary flow behavior with a single measurement for the first time. The presented setup offers single pixel resolution with measurement rates up to 40 kHz at a maximum image resolution of 256 px x 128 px. Based on a comparison with reference measurements using a standard pointwise laser interferometric vibrometer, the new system is validated and a discussion of the measurement uncertainty is presented. Finally, the measurement of refractive index fluctuations inside a flame volume is demonstrated.
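Estimating a flow velocity from two simultaneously measured pixel signals a known distance apart amounts to a transit-time measurement: find the cross-correlation lag between the signals and divide the separation by the delay. A sketch (signal shapes, distances and sampling rate are illustrative, not the paper's data):

```python
import numpy as np

def transit_velocity(sig_upstream, sig_downstream, dx, fs):
    """Estimate a convection velocity from the cross-correlation lag
    between two pixel signals separated by dx metres, sampled at fs Hz."""
    a = sig_upstream - np.mean(sig_upstream)
    b = sig_downstream - np.mean(sig_downstream)
    corr = np.correlate(b, a, mode="full")
    # positive lag: the downstream signal trails the upstream one
    lag = int(np.argmax(corr)) - (len(a) - 1)
    if lag <= 0:
        raise ValueError("no positive transit delay detected")
    return dx * fs / lag

# A disturbance passes the upstream pixel at sample 10 and reaches the
# downstream pixel, 10 mm away, 5 samples later at 1 kHz sampling.
up, down = np.zeros(100), np.zeros(100)
up[10], down[15] = 1.0, 1.0
v = transit_velocity(up, down, dx=0.01, fs=1000.0)
```

In practice the broadband refractive-index fluctuations of the flame play the role of the disturbance, and the lag estimate is refined by interpolating the correlation peak.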
APA, Harvard, Vancouver, ISO, and other styles
37

El-Ossta, Esam Elmehde Amar. "Automated dust storm detection using satellite images : development of a computer system for the detection of dust storms from MODIS satellite images and the creation of a new dust storm database." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5760.

Full text
Abstract:
Dust storms are natural hazards whose frequency has increased in recent years over the Sahara Desert, Australia, the Arabian Desert, Turkmenistan and northern China, and which have worsened during the last decade. Dust storms increase air pollution, impact urban areas and farms, and affect ground and air traffic. They damage human health, reduce the temperature, damage communication facilities, and reduce visibility, which delays both road and air traffic and impacts both urban and rural areas. Thus, it is important to know the causation, movement and radiation effects of dust storms. The monitoring and forecasting of dust storms is increasing in order to help governments reduce their negative impact. Satellite remote sensing is the most common method, but its use over sandy ground is still limited, since dust and sand share similar characteristics. Moreover, satellite remote sensing using true-colour images or estimates of aerosol optical thickness (AOT) and algorithms such as the deep blue algorithm has limitations for identifying dust storms. Many researchers have studied the detection of dust storms during daytime in a number of different regions of the world, including China, Australia, America, and North Africa, using a variety of satellite data, but fewer studies have focused on detecting dust storms at night. The key elements of the present study are to use data from the Moderate Resolution Imaging Spectroradiometers on the Terra and Aqua satellites to develop a more effective automated method for detecting dust storms during both day and night, and to generate a MODIS dust storm database.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Cheng. "A General System for Supervised Biomedical Image Segmentation." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/214.

Full text
Abstract:
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe a system that, with few modifications, can be used in a variety of image segmentation problems. The system is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. In summary, we present several innovations: (1) A general framework for such a system is proposed, where rotations and variations of intensity neighborhoods across scales are modeled, and a multi-scale classification framework is utilized to segment unknown images; (2) A fast algorithm for training data selection and pixel classification is presented, where a majority-voting-based criterion is proposed for selecting a small subset from the raw training set. When combined with a 1-nearest neighbor (1-NN) classifier, this algorithm is able to provide decent classification accuracy within reasonable computational complexity. (3) A general deformable model for optimization of segmented regions is proposed, which takes the decision values from the preceding pixel classification process as input and optimizes the segmented regions in a partial differential equation (PDE) framework. We show that the performance of this system in several different biomedical applications, such as tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications. In addition, we describe another general segmentation system for biomedical applications where a strong prior on shape is available (e.g. cells, nuclei). 
The idea is based on template matching and supervised learning, and we show examples of segmenting cells and nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures from a given data set to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting cells and nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the proposed template-based method presents increased robustness in the sense of better handling variations in illumination and variations in texture from different imaging modalities, providing smoother and more accurate segmentation borders, and better handling cluttered cells and nuclei.
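The normalized cross-correlation score used for the template matching step can be sketched as follows. This is a minimal single-window version for illustration; actual matching slides the template over the image and keeps the best-scoring position:

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation between an image window and a template,
    both zero-meaned, so the score ignores brightness and contrast offsets."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return float((w * t).sum() / denom) if denom > 0 else 0.0

# A tiny illustrative window.
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
```

The invariance to affine intensity changes is what makes the score robust to the illumination variations mentioned in the abstract: a window and the same window rescaled in brightness and contrast score identically.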
APA, Harvard, Vancouver, ISO, and other styles
39

Sällqvist, Jessica. "Real-time 3D Semantic Segmentation of Timber Loads with Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148862.

Full text
Abstract:
Volume measurement of timber loads is done in conjunction with timber trade. When dealing with goods of major economic value such as these, it is important to achieve an impartial and fair assessment when determining price-based volumes. With the help of Saab's missile targeting technology, CIND AB develops products for digital volume measurement of timber loads. Currently there is a system in operation that automatically reconstructs timber trucks in motion to create measurable images of them. Future iterations of the system are expected to fully automate the scaling by generating a volumetric representation of the timber and calculating its external gross volume. The first challenge towards this development is to separate the timber load from the truck. This thesis aims to evaluate and implement an appropriate method for semantic pixel-wise segmentation of timber loads in real time. Image segmentation is a classic but difficult problem in computer vision. To achieve greater robustness, it is therefore important to carefully study and make use of the conditions given by the existing system. Variations in timber type, truck type and packing together create unique combinations that the system must be able to handle. The system must work around the clock in different weather conditions while maintaining high precision and performance.
APA, Harvard, Vancouver, ISO, and other styles
40

Kaszta, Zaneta. "Using remotely-sensed habitat data to model space use and disease transmission risk between wild and domestic herbivores in the African savanna." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/253820.

Full text
Abstract:
The interface between protected and communal lands presents certain challenges for wildlife conservation and the sustainability of local livelihoods. This is particularly the case in South Africa, where foot-and-mouth disease (FMD), mainly carried by African buffalo (Syncerus caffer), is transmitted to cattle despite a fence surrounding the protected areas. The ultimate objective of this thesis was to improve knowledge of FMD transmission risk by analyzing behavioral patterns of African buffalo and cattle near the Kruger National Park, and by modelling at fine spatial scale the seasonal risk of contact between them. Since vegetation is considered a primary bottom-up regulator of grazer distribution, I developed fine-scale seasonal maps of vegetation. For that purpose, I explored the utility of the WorldView-2 (WV-2) sensor, comparing object-based (OBIA) and pixel-based image classification methods, and various traditional and advanced classification algorithms. All tested methods produced relatively high accuracies (>77%); however, OBIA with random forest and support vector machines performed significantly better, particularly for wet-season imagery (93%). In order to investigate the seasonal home ranges and resource utilization distributions of buffalo and cattle, I combined the telemetry data with fine-scale maps of forage (vegetation components, and forage quality and quantity). I found that buffalo behaved more like bulk feeders at the scale of home ranges but were more selective within their home range, preferring forage quality over quantity. In contrast, cattle selected forage with higher quantity and quality during the dry season but behaved like bulk grazers in the wet season. Based on the resource utilization models, I generated seasonal cost (resistance) surfaces of buffalo and cattle movement through the landscape considering various scenarios. 
These surfaces were used to predict buffalo and cattle dispersal routes by applying a cumulative resistant kernels method. The final seasonal contact risks maps were developed by intersecting the cumulative resistant kernels layers of both species and by averaging all scenarios. The maps revealed important seasonal differences in the contact risk, with higher risk in the dry season and hotspots along a main river and the weakest parts of the fence. Results of this study can guide local decision makers in the allocation of resources for FMD mitigation efforts and provide guidelines to minimize overgrazing.<br>Doctorat en Sciences<br>info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
41

Meyer, Thomas, Tom Venus, Holger Sieg, et al. "Simultaneous Quantification and Visualization of Titanium Dioxide Nanomaterial Uptake at the Single Cell Level in an In Vitro Model of the Human Small Intestine." Wiley, 2019. https://ul.qucosa.de/id/qucosa%3A70804.

Full text
Abstract:
Useful properties render titanium dioxide nanomaterials (NMs) one of the most commonly used NMs worldwide. TiO2 powder is used as a food additive (E171), which may contain up to 36% nanoparticles. Consequently, humans could be exposed to comparatively high amounts of NMs that may induce adverse effects under chronic exposure conditions. Visualization and quantification of cellular NM uptake, as well as of NM interactions with biomolecules within cells, are key issues regarding risk assessment. Advanced quantitative imaging tools for NM detection within biological environments are therefore required. A combination of the label-free, spatially resolved dosimetric tools micro-resolved particle-induced X-ray emission and Rutherford backscattering, together with high-resolution imaging techniques such as time-of-flight secondary ion mass spectrometry and transmission electron microscopy, is applied to visualize the cellular translocation pattern of TiO2 NMs and to quantify the NM load and the cellular major and trace elements in differentiated Caco-2 cells as a function of their surface properties at the single-cell level. Internalized NMs are not only able to impair cellular homeostasis by themselves, but also to induce an intracellular redistribution of metabolically relevant elements such as phosphorus, sulfur, iron, and copper.
APA, Harvard, Vancouver, ISO, and other styles
42

Wachs, Marina-Elena, Theresa Scholl, Gesa Balbig, and Katharina Grobheiser. "Textile Engineering ›SurFace‹: Oberflächenentwurf von der taktilen zur grafischen zur taktilen Erfahrbarkeit im Design Engineering der Zukunft." Thelem Universitätsverlag & Buchhandlung GmbH & Co. KG, 2021. https://tud.qucosa.de/id/qucosa%3A75865.

Full text
Abstract:
In the digitalization phase of the fourth industrial revolution, textile engineering faces the challenge of translating the tactile experience of physical surfaces into digital tools. Here, seemingly analogue design methods such as sketching, with their characteristic stroke, conflict with digital design surfaces and spaces. How can we use digital material libraries in such a way that they correspond to the "true" surface (aesthetics) of our physically experienced worlds? We are developing the interactive design spaces of the future, "sur face", via the "face" of the material. By designing in a networked way, by means of a matrix and a digital stroke and in the vis à vis of analogue and digital, we come closer to the requirements of human-centred design for future textile worlds.
APA, Harvard, Vancouver, ISO, and other styles
43

Barapatre, Nirav. "Application of Ion Beam Methods in Biomedical Research." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-126262.

Full text
Abstract:
The methods of analysis with a focused ion beam, commonly termed nuclear microscopy, include quantitative physical processes like PIXE and RBS. The element concentrations in a sample can be quantitatively mapped with a sub-micron spatial resolution and a sub-ppm sensitivity. Its fully quantitative and non-destructive nature makes it particularly suitable for analysing biological samples. The applications in biomedical research are manifold. The iron overload hypothesis in Parkinson's disease is investigated by a differential analysis of human substantia nigra. The trace element content is quantified in neuromelanin, in microglia cells, and in the extraneuronal environment. A comparison of six Parkinsonian cases with six control cases revealed no significant elevation in the iron level bound to neuromelanin. In fact, a decrease in the Fe/S ratio of Parkinsonian neuromelanin was measured, suggesting a modification in its iron binding properties. Drosophila melanogaster, the fruit fly, is a widely used model organism in neurobiological experiments. The electrolyte elements are quantified in various organs associated with olfactory signalling, namely the brain, the antenna and its sensilla hairs, the mouth parts, and the compound eye. The determination of spatially resolved element concentrations is useful in preparing the organ-specific Ringer's solution, an artificial lymph that is used in disruptive neurobiological experiments. The role of trace elements in the progression of atherosclerosis is examined in a pilot study. A differential quantification of the element content in an induced murine atherosclerotic lesion reveals elevated S and Ca levels in the artery wall adjacent to the lesion and an increase in iron in the lesion. The 3D quantitative distribution of elements is reconstructed by means of stacking the 2D quantitative maps of consecutive sections of an artery. 
The feasibility of generating a quantitative elemental rodent brain atlas by Large Area Mapping is investigated by measuring at high beam currents. A whole coronal section of the rat brain was measured in segments in 14 h. Individual quantitative maps of the segments are pieced together to reconstruct a high-definition element distribution map of the whole section with a subcellular spatial resolution. The use of immunohistochemical staining enhanced with single elements helps in determining the cell specific element content. Its concurrent use with Large Area Mapping can give cellular element distribution maps.
APA, Harvard, Vancouver, ISO, and other styles
44

Djikic, Addi. "Segmentation and Depth Estimation of Urban Road Using Monocular Camera and Convolutional Neural Networks." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235496.

Full text
Abstract:
Deep learning for safe autonomous transport is rapidly emerging. Fast and robust perception for autonomous vehicles will be crucial for future navigation in urban areas with high traffic and human interplay. Previous work focuses on extracting full-image depth maps, or on finding specific road features such as lanes. However, in urban environments lanes are not always present, and sensors such as LiDAR with 3D point clouds provide a rather sparse depth perception of the road and demand complex algorithmic approaches. In this thesis we derive a novel convolutional neural network that we call AutoNet. It is designed as an encoder-decoder network for pixel-wise depth estimation of the drivable free space of an urban road, using only a monocular camera, and handled as a supervised regression problem. AutoNet is also constructed as a classification network to solely classify and segment the drivable free space in real time with monocular vision, handled as a supervised classification problem, which proves to be a simpler and more robust solution than the regression approach. We also implement the state-of-the-art neural network ENet for comparison, which is designed for real-time semantic segmentation with fast inference speed. The evaluation shows that AutoNet outperforms ENet on every performance metric, but is slower in terms of frame rate. However, optimization techniques are proposed for future work on how to increase the frame rate of the network while still maintaining its robustness and performance. All training and evaluation are done on the Cityscapes dataset. New ground truth labels for road depth perception are created for training with a novel approach of fusing pre-computed depth maps with semantic labels. Data collection with a Scania vehicle, mounted with a monocular camera, is conducted to test the final derived models. 
The proposed AutoNet shows promising state-of-the-art performance with regard to road depth estimation as well as road classification.
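The ground-truth construction described above, which fuses a pre-computed depth map with semantic labels so that only drivable-road pixels retain a depth value, might look like the following sketch (the road label id and the ignore value are assumptions, not the thesis's actual encoding):

```python
ROAD_ID = 7    # assumed semantic id for "road" (Cityscapes uses label id 7)
IGNORE = -1.0  # assumed placeholder marking non-road pixels in the ground truth

def fuse_road_depth(depth_map, semantic_map):
    """Keep depth only where the semantic label marks drivable road.

    depth_map, semantic_map: equally sized 2-D lists (rows of pixels).
    Returns a road-depth ground-truth map with IGNORE elsewhere.
    """
    return [
        [d if s == ROAD_ID else IGNORE for d, s in zip(drow, srow)]
        for drow, srow in zip(depth_map, semantic_map)
    ]

depth = [[12.5, 30.0], [8.0, 55.0]]
labels = [[7, 11], [7, 7]]             # 7 = road, 11 = e.g. building
gt = fuse_road_depth(depth, labels)    # [[12.5, -1.0], [8.0, 55.0]]
```

A regression loss would then be evaluated only on pixels not equal to `IGNORE`.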
APA, Harvard, Vancouver, ISO, and other styles
45

Ye, Ju-Yi, and 葉居易. "Construction of Multi-Class Classification Model on Defective Pixels on TFT-LCD Panels Based on Automatic Optical Inspection." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ucjuxw.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chiu, Hung-Chih, and 邱泓智. "Construction of Multi-Category Classification Model for Identifying Defective Pixels on TFT-LCD Panels Using Deep Learning Approach." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4v73kv.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Industrial Engineering and Management, academic year 107. The number and position of defects on a TFT-LCD panel determine the grade of the panel. Adding automatic optical inspection (AOI) to the light-on test makes it possible to detect defects automatically from images of the LCD panel. This study constructs a model based on a convolutional neural network, a deep-learning architecture that can classify several kinds of defects at once. Without any pre-processing, the model automatically learns features from a large amount of data during training, resulting in a high-efficiency, high-accuracy defect classification model. Actual panel images from Taiwan's leading computer hardware manufacturers were used for model construction, testing and validation. After validation, the model achieves 99.9% accuracy with excellent specificity and sensitivity, and can classify a complete TFT-LCD panel in only 467 seconds.
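The specificity and sensitivity figures above are per-class quantities; for a multi-class defect model they can be computed one-vs-rest from a confusion matrix, as in this sketch (the toy classes and counts are illustrative, not the thesis's actual defect categories):

```python
def per_class_metrics(confusion):
    """One-vs-rest sensitivity and specificity from a square confusion matrix.

    confusion[i][j] = number of samples of true class i predicted as class j.
    """
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    metrics = {}
    for k in range(n):
        tp = confusion[k][k]
        fn = sum(confusion[k]) - tp                       # true k, predicted otherwise
        fp = sum(confusion[i][k] for i in range(n)) - tp  # predicted k, true otherwise
        tn = total - tp - fn - fp
        metrics[k] = {
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
        }
    return metrics

# Toy 3-class example (e.g. bright-dot / dark-dot / normal pixels).
cm = [[50, 2, 0],
      [1, 47, 2],
      [0, 1, 97]]
m = per_class_metrics(cm)
```

High accuracy alone can hide a weak minority class, which is why the thesis reports specificity and sensitivity alongside it.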
APA, Harvard, Vancouver, ISO, and other styles
47

Uttam, Kumar. "Algorithms For Geospatial Analysis Using Multi-Resolution Remote Sensing Data." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2280.

Full text
Abstract:
Geospatial analysis involves the application of statistical methods, algorithms and information retrieval techniques to geospatial data. It incorporates time into spatial databases and facilitates the investigation of land cover (LC) dynamics through data, models, and analytics. LC dynamics induced by human and natural processes play a major role in global as well as regional scale patterns, which in turn influence weather and climate. Hence, understanding LC dynamics at the local/regional as well as global level is essential to evolve appropriate management strategies that mitigate the impacts of LC changes. These dynamics can be captured through multi-resolution remote sensing (RS) data. However, with the advancements in sensor technologies, suitable algorithms and techniques are required for optimal, cost-effective integration of information from multi-resolution sensors while overcoming possible data and methodological constraints. In this work, several traditional and advanced per-pixel classification techniques have been evaluated on multi-resolution data, along with the role of ancillary geographical data in the performance of the classifiers. Techniques for linear and non-linear un-mixing, endmember variability, and determination of the spatial distribution of class components within a pixel have been applied and validated on multi-resolution data. An endmember estimation method is proposed and its performance compared with manual, semi-automatic and fully automatic methods of endmember extraction. A novel technique, the Hybrid Bayesian Classifier, is developed for per-pixel classification: class prior probabilities are determined by un-mixing low-spatial / high-spectral resolution multispectral data, class-conditional probabilities are estimated from ground training data, and the resulting posterior probabilities are assigned to every pixel of high-spatial / low-spectral resolution multispectral data through Bayesian classification.
These techniques have been validated with multi-resolution data for various landscapes at varying altitudes. As a case study, spatial metrics and cellular automata based modelling have been applied to a rapidly urbanising landscape at moderate altitude.
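The Bayesian fusion at the core of the Hybrid Bayesian Classifier, combining per-pixel priors from spectral un-mixing with class-conditional likelihoods from ground training data, can be sketched as follows (class names and numbers are illustrative, not from the thesis):

```python
def hybrid_bayes_posterior(priors, likelihoods):
    """Posterior P(class | pixel) proportional to prior(class) * likelihood(pixel | class).

    priors: per-pixel class priors, e.g. abundance fractions obtained by
            spectrally un-mixing the coarse-resolution data.
    likelihoods: class-conditional likelihoods of the fine-resolution pixel,
            e.g. from Gaussian densities fitted to ground training data.
    Both are dicts keyed by class name; returns normalised posteriors.
    """
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Un-mixing says the coarse pixel is 70% vegetation, 30% built-up;
# the fine pixel's spectrum is twice as likely under the built-up model.
post = hybrid_bayes_posterior({"veg": 0.7, "built": 0.3},
                              {"veg": 0.1, "built": 0.2})
```

The pixel would be assigned to the class with the highest posterior, here still vegetation because the un-mixing prior outweighs the likelihood ratio.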
APA, Harvard, Vancouver, ISO, and other styles
48

Hung, Je-Shiung, and 洪哲雄. "Fractal-modeled image compression based on shade-pixel-percentage classification." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/30504276433256249408.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Graduate Institute of Engineering Technology, academic year 83. Fractal-based image compression has received extensive attention in recent years. The most popular fractal compression schemes are derived from Jacquin's approach, in which each range block has to be matched against affine-transformed domain blocks. This matching is highly time-consuming, and classification is one way of speeding up the range/domain block matching process. In this thesis, we propose a classification scheme based on the percentage of shade pixels in a block. Verification by a full search of the complete domain pool shows that, in most cases, the matched range block and domain block belong to the same class under our classification. Based on this observation, we propose to search for the domain block matching a range block only in the domain subpool that has the same level of shade-pixel percentage. The result is a significant reduction in encoding time at the expense of a minor degradation in the quality of the compressed image.
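A minimal sketch of the shade-pixel-percentage classification described above (the tolerance defining a "shade pixel" and the number of classes are assumptions; the thesis's exact definitions may differ):

```python
def shade_fraction(block, tol=8):
    """Fraction of pixels within `tol` grey levels of the block mean.

    `tol` is an assumed tolerance; a pixel this close to the block mean
    is treated as a "shade" (near-uniform) pixel.
    """
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum(abs(p - mean) <= tol for p in flat) / len(flat)

def shade_class(block, levels=4):
    """Quantise the shade-pixel percentage into one of `levels` classes."""
    f = shade_fraction(block)
    return min(int(f * levels), levels - 1)

flat_block = [[100, 101], [99, 100]]  # nearly uniform -> high shade fraction
edge_block = [[0, 255], [0, 255]]     # strong edge    -> low shade fraction
```

An encoder would then compare a range block only against domain blocks with the same `shade_class`, shrinking the search space relative to a full domain-pool search.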
APA, Harvard, Vancouver, ISO, and other styles
49

Κωστόπουλος, Σπυρίδων. "Development of supervised and unsupervised pixel-based classification methods for medical image segmentation." Thesis, 2009. http://nemertes.lis.upatras.gr/jspui/handle/10889/1877.

Full text
Abstract:
Compared with other common cancers, breast cancer is among the most thoroughly researched; nevertheless, important open issues remain. One of these is clarifying the importance of certain biological factors, such as histological tumour grade and estrogen receptor (ER) status, to the clinical management of the disease. Until now, histological grading and ER status assessment have been based on visual evaluation of breast tissue specimens under the microscope. More specifically, grading is determined by visual estimation of certain histological features on H&E (Hematoxylin & Eosin) stained specimens according to the World Health Organization (WHO) guidelines, whereas ER status is assessed as the percentage of expressing nuclei on immunohistochemically stained (IHC) specimens, as suggested by the American Society of Clinical Oncology (ASCO) protocol. Recent studies have attempted to examine whether histological tumour grade relates to ER status; such a relation appears important for the various treatment strategies followed in breast tumours. However, the quantification of ER status presents certain weaknesses: a) there is a lack of consensus among experts regarding the protocol to be followed for calculating ER status; b) an exact estimate of ER status is difficult to obtain, since it would require manual counting of positively expressed nuclei. In clinical practice a gross estimate is often obtained by histopathologists through visual inspection of representative specimen areas. Consequently, the evaluation of ER status, which previous studies have considered the key measure for assessing the correlation between ERs and tumour grade, is prone to the physician's subjective estimation. Therefore, more reliable methods are needed, and this thesis has been carried out in search of such alternative, more reliable methods.
Accordingly, the aims of the present thesis are: (i) to develop a reliable segmentation methodology for detecting ER-expressed nuclei in IHC-stained breast cancer tissue images, (ii) to objectively quantify ER status in IHC-stained breast cancer tissue images, (iii) to investigate the potential correlation between ER status and histological grade by combining information from IHC- and H&E-stained breast cancer tissue images obtained from the same patient, (iv) to establish evidence linking chromatin texture variations with textural variations in ER-expressed nuclei, and (v) to investigate the potential of the proposed hybrid supervised pattern recognition strategies in other challenging fields of medical image processing and analysis. To address these issues, and in search of reliable methods for quantitatively assessing ER status and its correlation with histological grade, a novel hybrid (unsupervised-supervised) pattern recognition methodology has been designed, developed and implemented for the analysis of breast cancer tissue images. Moreover, it is shown that proper modification of the proposed methodology results in a generalized pixel classification approach suitable for processing and analysis of medical images other than microscopy, such as Computed Tomography Angiography images.
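As a simple illustration of the kind of objective ER-status quantification the thesis pursues: once ER-expressed nuclei have been segmented, the status reduces to a percentage of positive nuclei. A sketch (the 1% positivity cut-off is an assumption for illustration, not the thesis's criterion):

```python
def er_status(nuclei):
    """ER status as the percentage of positively expressed nuclei.

    nuclei: list of booleans, one per detected nucleus
            (True = positively stained in the IHC image).
    Returns (percentage, is_positive); 1% is used here as an assumed
    positivity cut-off.
    """
    if not nuclei:
        return 0.0, False
    pct = 100.0 * sum(nuclei) / len(nuclei)
    return pct, pct >= 1.0

pct, positive = er_status([True] * 12 + [False] * 88)  # 12.0 %, ER-positive
```

Counting every segmented nucleus this way replaces the histopathologist's visual gross estimate with an exhaustive, reproducible count.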
APA, Harvard, Vancouver, ISO, and other styles
50

Gong, Sha. "Pixel Classification Algorithms for Noise Removal and Signal Preservation in Low Pass Filtering." Thesis, 2013. http://spectrum.library.concordia.ca/978071/1/Gong_MASc_S2014.pdf.

Full text
Abstract:
Contrast enhancement is essential for improving image quality in most image pre-processing. A histogram equalization process can be used to achieve high contrast; however, it also generates noise. Involving a low-pass filtering process is an effective way to achieve high-quality, low-noise contrast enhancement, but it introduces a conflict between noise removal and signal preservation. To perform discriminative low-pass filtering in the presence of noise and of signal variations in different regions, it is necessary to develop good algorithms for classifying the pixels. In this thesis, two classification algorithms are proposed. They aim at low-contrast images in which gradient signals are severely degraded by various causes during the acquisition process. They classify the pixels according to the initial gray-level homogeneity of their regions. The basic classification is done by gradient thresholding, with threshold values generated by means of gradient distribution analysis. To tackle the various gradient degradation patterns found in low-contrast images, image pixels are grouped in a particular way such that, within the same group, pixels in homogeneous regions can easily be distinguished from those in non-homogeneous regions by simple gradient thresholding. Two algorithms based on different grouping methods are proposed. The first algorithm aims at high dynamic range images. The pixels are first grouped according to their gray-level ranges, since the gradient degradation is, in such a case, gray-level-dependent. The gradient distribution of each sub-range is obtained, and pixel classification is then performed to adapt to the original gray-level signals of the sub-range. The other algorithm tackles a wider range of low-contrast images.
In this algorithm, a gray-level histogram thresholding is performed to divide the pixels into two groups according to their likelihood of being homogeneous or non-homogeneous pixels. Thus, one group has a majority of homogeneous pixels and the other a majority of non-homogeneous pixels, and the classification done in each group identifies those in the minority. Both proposed algorithms are computationally very simple, and each is incorporated into the contrast enhancement procedure so that the integrated low-pass filters effectively remove the noise generated in the histogram equalization while preserving signal details. The simulation results demonstrate, by subjective observation and objective measurements, that the proposed algorithms lead to superior-quality contrast enhancement for a variety of images, with respect to two advanced enhancement schemes.
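The basic classification step described above, gradient thresholding with a threshold drawn from the gradient distribution, can be sketched as follows (a fixed percentile stands in for the thesis's gradient-distribution analysis, which is more elaborate):

```python
def classify_pixels(image, percentile=80):
    """Label each interior pixel homogeneous (0) or non-homogeneous (1)
    by thresholding its gradient magnitude.

    image: 2-D list of grey levels. The threshold is taken from the
    image's own gradient distribution (an assumed percentile, standing
    in for the thesis's gradient-distribution analysis).
    """
    h, w = len(image), len(image[0])
    # Central-difference gradient magnitude (L1) at interior pixels.
    grads = [[abs(image[y][x + 1] - image[y][x - 1]) +
              abs(image[y + 1][x] - image[y - 1][x])
              for x in range(1, w - 1)]
             for y in range(1, h - 1)]
    flat = sorted(g for row in grads for g in row)
    thr = flat[min(len(flat) - 1, int(len(flat) * percentile / 100))]
    return [[1 if g >= thr else 0 for g in row] for row in grads]

# Flat region on the left, a step edge toward the right.
img = [[0, 0, 0, 100, 100] for _ in range(5)]
labels = classify_pixels(img)  # edge pixels flagged as non-homogeneous
```

A discriminative low-pass filter would then smooth aggressively on the 0-labelled (homogeneous) pixels and conservatively on the 1-labelled ones.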
APA, Harvard, Vancouver, ISO, and other styles