
Dissertations / Theses on the topic 'Image processing Mathematical statistics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Image processing Mathematical statistics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Stephens, David A. "Bayesian edge-detection in image processing." Thesis, University of Nottingham, 1990. http://eprints.nottingham.ac.uk/11723/.

Full text
Abstract:
Problems associated with the processing and statistical analysis of image data are the subject of much current interest, and many sophisticated techniques for extracting semantic content from degraded or corrupted images have been developed. However, such techniques often require considerable computational resources, and thus are, in certain applications, inappropriate. The detection of localised discontinuities, or edges, in the image can be regarded as a pre-processing operation in relation to these sophisticated techniques which, if implemented efficiently and successfully, can provide a means for an exploratory analysis that is useful in two ways. First, such an analysis can be used to obtain quantitative information relating to the underlying structures from which the various regions in the image are derived, about which we would generally be a priori ignorant. Secondly, in cases where the inference problem relates to discovery of the unknown location or dimensions of a particular region or object, or where we merely wish to infer the presence or absence of structures having a particular configuration, an accurate edge-detection analysis can circumvent the need for the subsequent sophisticated analysis. Relatively little interest has been focussed on the edge-detection problem within a statistical setting. In this thesis, we formulate the edge-detection problem in a formal statistical framework, and develop a simple and easily implemented technique for the analysis of images derived from two-region single edge scenes. We extend this technique in three ways: first, to allow the analysis of more complicated scenes; secondly, by incorporating spatial considerations; and thirdly, by considering images of various qualitative nature. We also study edge reconstruction and representation given the results obtained from the exploratory analysis, and a cognitive problem relating to the detection of objects modelled by members of a class of simple convex objects. Finally, we study in detail aspects of one of the sophisticated image analysis techniques, and the important general statistical applications of the theory on which it is founded.
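As a flavour of the kind of formulation the abstract describes, the sketch below (a minimal illustration, not code from the thesis) finds the maximum-likelihood edge location in a one-dimensional profile assumed to consist of two constant-mean regions plus i.i.d. Gaussian noise; under that model the ML split point is the one minimising the pooled within-segment sum of squares.

```python
import numpy as np

def ml_edge_location(profile):
    """Maximum-likelihood split of a 1-D profile modelled as two constant-mean
    regions plus i.i.d. Gaussian noise: the best edge position minimises the
    pooled within-segment residual sum of squares."""
    best_k, best_rss = None, np.inf
    for k in range(1, len(profile)):              # candidate edge between k-1 and k
        left, right = profile[:k], profile[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```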
APA, Harvard, Vancouver, ISO, and other styles
2

Silwal, Sharad Deep. "Bayesian inference and wavelet methods in image processing." Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kravchuk, Olena. "Trigonometric scores rank procedures with applications to long-tailed distributions /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19314.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Qiao, Tong. "Statistical detection for digital image forensics." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0006/document.

Full text
Abstract:
The remarkable evolution of information technologies and digital imaging technology in the past decades has made digital images ubiquitous. The tampering of these images has become an unavoidable reality, especially in the field of cybercrime. The credibility and trustworthiness of digital images have been eroded, with important political, economic, and social consequences. To restore trust in digital images, the field of digital forensics was born. Three important problems are addressed in this thesis: image origin identification, detection of hidden information in a digital image, and, as an example of tampering detection, the detection of resampling. The goal is to develop a statistical decision approach, as reliable as possible, that guarantees a prescribed false alarm probability. To this end, the approach involves designing a statistical test within the framework of hypothesis testing theory based on a parametric model that characterizes the physical and statistical properties of natural images. This model is developed by studying the image processing pipeline of a digital camera. As part of this work, the difficulty posed by the presence of unknown parameters is addressed using statistical estimation, making the application of the statistical tests straightforward in practice. Numerical experiments on simulated and real images have highlighted the relevance of the proposed approach.
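As a minimal illustration of a detector built to respect a prescribed false-alarm probability (the kind of constraint described above, not the thesis's actual test), the sketch below thresholds the sample mean of Gaussian observations at a level obtained from the standard normal quantile.

```python
import numpy as np
from scipy.stats import norm

def detect_mean_shift(x, sigma, alpha=0.01):
    """Toy detector: decide 'signal present' when the sample mean of n Gaussian
    observations exceeds a threshold chosen so that the false-alarm probability
    under the zero-mean hypothesis (known sigma) equals alpha."""
    n = x.size
    statistic = x.mean()
    threshold = sigma / np.sqrt(n) * norm.isf(alpha)   # P(statistic > threshold | H0) = alpha
    return statistic > threshold, statistic, threshold
```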
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Pennec Erwan. "Some (statistical) applications of Ockham's principle." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00802653.

Full text
Abstract:
This manuscript presents my scientific contributions of the last ten years at the interface between image processing and statistics. It begins with the study of a toy example, the estimation of the mean of a Gaussian vector, which introduces the type of statistical question I have been interested in, underlines the importance of approximation theory, and presents Ockham's parsimony principle. After a brief description of all the contributions, the manuscript is organised around the statistical models I have encountered: the white noise model, the density model and the conditional density model.
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Pei. "An investigation of statistical aspects of linear subspace analysis for computer vision applications." Monash University, Dept. of Electrical and Computer Systems Engineering, 2004. http://arrow.monash.edu.au/hdl/1959.1/9705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hill, Evelyn June. "Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images." University of Western Australia. School of Mathematics and Statistics, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0070.

Full text
Abstract:
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well results of experiments corresponded to what a typical human’s assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. It is desirable to be able to automate the processing of such data for efficient stock assessment for fisheries management. In this study some well known statistical pattern classification techniques were tested and new syntactical/structural pattern recognition techniques were developed. For testing of statistical pattern classification, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and the covariance matrices for the components of the model were used to indicate the location, size and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm has to be run a number of times with different numbers of components and then the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the numbers of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. starting positions of mixtures and number of mixtures), the Dynamic Cluster Finding algorithm (DCF) was developed (based on the Dog-Rabbit strategy). This algorithm can produce an estimate of the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons. The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine if the object occurred in the new image. In order to compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; then the curves were smoothed by fitting parametric cubic polynomials. Finally the curves were converted to arrays of numbers which represented the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object. The total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
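The EM-plus-model-selection step described above can be sketched with off-the-shelf tools; the snippet below is an illustrative stand-in (scikit-learn's GaussianMixture with BIC, a criterion closely related to MDL), not the code used in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pixel_clusters(pixels, max_components=8):
    """Fit Gaussian mixtures with 1..max_components components to (row, col)
    pixel coordinates and keep the model with the lowest BIC; swap in
    gmm.aic(pixels) to compare criteria as in the study."""
    best_model, best_score = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=3, random_state=0).fit(pixels)
        score = gmm.bic(pixels)
        if score < best_score:
            best_model, best_score = gmm, score
    return best_model   # means_ and covariances_ describe cluster location, size and shape
```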
APA, Harvard, Vancouver, ISO, and other styles
8

Patenaude, Brian Matthew. "Bayesian statistical models of shape and appearance for subcortical brain segmentation." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:52f5fee0-60e8-4387-9560-728843e187b3.

Full text
Abstract:
Our motivation is to develop an automated technique for the segmentation of sub-cortical human brain structures from MR images. To this purpose, models of shape-and-appearance are constructed and fit to new image data. The statistical models are trained from 317 manually labelled T1-weighted MR images. Shape is modelled using a surface-based point distribution model (PDM) such that the shape space is constrained to the linear combination of the mean shape and eigenvectors of the vertex coordinates. In addition, to model intensity at the structural boundary, intensities are sampled along the surface normal from the underlying image. We propose a novel Bayesian appearance model whereby the relationship between shape and intensity is modelled via the conditional distribution of intensity given shape. Our fully probabilistic approach eliminates the need for arbitrary weightings between shape and intensity as well as for tuning parameters that specify the relative contribution between the use of shape constraints and intensity information. Leave-one-out cross-validation is used to validate the model and fitting for 17 structures. The PDM for shape requires surface parameterizations of the volumetric, manual labels such that vertices retain a one-to-one correspondence across the training subjects. Surface parameterizations with correspondence are generated through the use of deformable models under constraints that embed the correspondence criterion within the deformation process. A novel force that favours equal-area triangles throughout the mesh is introduced. The force adds stability to the mesh such that minimal smoothing or within-surface motion is required. The use of the PDM for segmentation across a series of subjects results in a set of surfaces that retain point correspondence. The correspondence facilitates landmark-based shape analysis. Amongst other metrics, vertex-wise multivariate statistics and discriminant analysis are used to investigate local and global size and shape differences between groups. The model is fit, and shape analysis is applied, to two clinical datasets.
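A point distribution model of the kind described above, with the shape space spanned by the mean shape and the leading eigenvectors of the vertex coordinates, can be sketched as a PCA over aligned training shapes; the snippet below is a generic illustration (the array layout is an assumption), not the thesis's implementation.

```python
import numpy as np

def build_point_distribution_model(shapes, n_modes=10):
    """shapes: (n_subjects, n_vertices * 3) array of corresponding, aligned vertex
    coordinates. Returns the mean shape and the leading modes of variation, so any
    model instance is mean + b @ modes for a coefficient vector b."""
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    _, s, vt = np.linalg.svd(centred, full_matrices=False)   # PCA via SVD
    modes = vt[:n_modes]
    variances = (s[:n_modes] ** 2) / (shapes.shape[0] - 1)
    # Plausible instances usually keep |b_i| within about 3 * sqrt(variances[i])
    return mean, modes, variances
```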
APA, Harvard, Vancouver, ISO, and other styles
9

Velasco-Forero, Santiago. "Contributions en morphologie mathématique pour l'analyse d'images multivariées." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00820581.

Full text
Abstract:
This thesis contributes to the field of mathematical morphology and illustrates how multivariate statistics and machine learning techniques can be exploited to design an ordering on the space of vectors and to include the results of morphological operators in the analysis of multivariate images. In particular, we use supervised learning, random projections, tensor representations and conditional transformations to design new kinds of multivariate orderings and new morphological filters for multi/hyperspectral images. Our key contributions include the following points: * Exploration and analysis of supervised orderings based on kernel methods. * Proposal of an unsupervised ordering based on a statistical depth function computed by random projections. We begin by exploring the properties an image must satisfy so that the ordering, and the associated morphological operators, can be interpreted in a way similar to the grey-level case. This leads us to the notion of background decomposition. In addition, invariance properties are analysed and theoretical convergence is shown. * Analysis of supervised orderings in morphological template matching problems, which corresponds to the extension of the hit-or-miss operator to multivariate images through the use of supervised orderings. * Discussion of different strategies for morphological image decomposition. In particular, the additive morphological decomposition is introduced as an alternative for the analysis of remote sensing images, especially for dimensionality reduction and supervised classification of hyperspectral remote sensing images. * Proposal of a unified framework, based on morphological operators, for contrast enhancement and for salt-and-pepper noise filtering. * Introduction of a new framework of multivariate Boolean models using a complete lattice formulation. This theoretical contribution is useful for characterising and simulating multivariate textures.
APA, Harvard, Vancouver, ISO, and other styles
10

Buehler, Patrick. "Automatic learning of British Sign Language from signed TV broadcasts." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:2930e980-4307-41bf-b4ff-87e8c4d0d722.

Full text
Abstract:
In this work, we will present several contributions towards automatic recognition of BSL signs from continuous signing video sequences. Specifically, we will address three main points: (i) automatic detection and tracking of the hands using a generative model of the image; (ii) automatic learning of signs from TV broadcasts using the supervisory information available from subtitles; and (iii) generalisation given sign examples from one signer to recognition of signs from different signers. Our source material consists of many hours of video with continuous signing and corresponding subtitles recorded from BBC digital television. This is very challenging material for a number of reasons, including self-occlusions of the signer, self-shadowing, blur due to the speed of motion, and in particular the changing background. Knowledge of the hand position and hand shape is a pre-requisite for automatic sign language recognition. We cast the problem of detecting and tracking the hands as inference in a generative model of the image, and propose a complete model which accounts for the positions and self-occlusions of the arms. Reasonable configurations are obtained by efficiently sampling from a pictorial structure proposal distribution. The results using our method exceed the state-of-the-art for the length and stability of continuous limb tracking. Previous research in sign language recognition has typically required manual training data to be generated for each sign, e.g. a signer performing each sign in controlled conditions - a time-consuming and expensive procedure. We show that for a given signer, a large number of BSL signs can be learned automatically from TV broadcasts using the supervisory information available from subtitles broadcast simultaneously with the signing. We achieve this by modelling the problem as one of multiple instance learning. In this way we are able to extract the sign of interest from hours of signing footage, despite the very weak and "noisy" supervision from the subtitles. Lastly, we show that automatic recognition of signs can be extended to multiple signers. Using automatically extracted examples from a single signer, we train discriminative classifiers and show that these can successfully classify and localise signs in new signers. This demonstrates that the descriptor we extract for each frame (i.e. hand position, hand shape, and hand orientation) generalises between different signers.
APA, Harvard, Vancouver, ISO, and other styles
11

Dias, Felipe de Assis. "Increasing image resolution for wire-mesh sensor based on statistical reconstruction." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2880.

Full text
Abstract:
CNPq; FUNTEF-PR
Wire-mesh sensors (WMS) are able to generate cross-sectional images of multiphase flow and have been widely used to investigate flow phenomena in pilot plant studies. Such devices are able to measure flow parameters such as phase fraction (e.g. gas/liquid fraction) distribution and to visualize multiphase flows with high temporal and spatial resolution, and are hence an important tool for detailed flow investigation. However, their sensing principle is based on intrusive electrodes placed inside the pipe where the multiphase flow streams. The image resolution generated by the sensor is given by the number of crossing points formed by the transmitter and receiver wires. In many processes, however, the intrusive effect of such a sensor might be a limitation on its use. Therefore, a reduced number of wires could broaden the application field of wire-mesh sensors. For this reason, the present work presents an image reconstruction method to increase the resolution of WMS data acquired with less than the optimal number of electrode wires. In this way, a reduction of intrusive effects on the process under investigation may be achieved. The reconstruction method is based on a statistical view of regularization and is known as Maximum a Posteriori (MAP). 16x16 WMS flow data are used to determine a multivariate Gaussian flow model, which in turn is used as regularization in the reconstruction. A sensitivity matrix is estimated by the finite element method (FEM) and incorporated into the MAP algorithm. Experimental data are used to validate the proposed method, which is compared with spline interpolation. Experimental results show that the MAP reconstruction performs better than interpolation and achieves deviations in gas void fraction estimation within ±10% in the vast majority of operating points. The tests were performed in a horizontal water-gas flow loop operating in the intermittent (slug) flow regime.
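For a linear forward model with Gaussian noise and a multivariate Gaussian prior, the MAP reconstruction described above reduces to a regularised least-squares problem with a closed-form solution; the sketch below illustrates that estimator in general terms (here A stands for a sensitivity matrix, which the thesis obtains by FEM).

```python
import numpy as np

def map_reconstruction(A, b, mu, Sigma, noise_var=1.0):
    """Linear-Gaussian MAP estimate: measurements b = A x + noise, prior
    x ~ N(mu, Sigma). The posterior mode solves a regularised normal equation."""
    Sigma_inv = np.linalg.inv(Sigma)
    H = A.T @ A / noise_var + Sigma_inv
    rhs = A.T @ b / noise_var + Sigma_inv @ mu
    return np.linalg.solve(H, rhs)
```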
APA, Harvard, Vancouver, ISO, and other styles
12

Sadler, Rohan. "Image-based modelling of pattern dynamics in a semiarid grassland of the Pilbara, Australia." University of Western Australia. School of Plant Biology, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0155.

Full text
Abstract:
[Truncated abstract] Ecologists are increasingly interested in quantifying local interacting processes and their impacts on spatial vegetation patterns. In arid and semiarid ecosystems, theoretical models (often spatially explicit) of dynamical system behaviour have been used to provide insight into changes in vegetation patterning and productivity triggered by ecological events, such as fire and episodic rainfall. The incorporation of aerial imagery of vegetation patterning into current theoretical models remains a challenge, as few theoretical models may be inferred directly from ecological data, let alone imagery. However, if conclusions drawn from theoretical models were well supported by image data then these models could serve as a basis for improved prediction of complex ecosystem behaviour. The objective of this thesis is therefore to innovate methods for inferring theoretical models of vegetation dynamics from imagery. ... These results demonstrate how an ad hoc inference procedure returns biologically meaningful parameter estimates for a germ-grain model of T. triandra vegetation patterning, with VLSA photography as data. Various aspects of the modelling and inference procedures are discussed in the concluding chapter, including possible future extensions and alternative applications for germ-grain models. I conclude that the state-and-transition model provides an effective exploration of an ecosystem's dynamics, and complements spatially explicit models designed to test specific ecological mechanisms. Significantly, both types of models may now be inferred from image data through the methodologies I have developed, and can provide an empirical basis to theoretical models of complex vegetation dynamics used in understanding and managing arid (and other) ecological systems.
APA, Harvard, Vancouver, ISO, and other styles
13

Cogranne, Rémi. "Détection statistique d'informations cachées dans une image naturelle à partir d'un modèle physique." Phd thesis, Université de Technologie de Troyes, 2012. http://tel.archives-ouvertes.fr/tel-00706171.

Full text
Abstract:
With the advent of personal computing, the Internet and digital photography, a large number of natural images (acquired by a camera) circulate all over the world. Images are sometimes modified, legitimately or illicitly, to transmit confidential or secret information. In this context, steganography is a method of choice for transmitting and hiding information. Consequently, it is necessary to detect the presence of hidden information in natural images. The goal of this thesis is to develop a new statistical approach to perform this detection with the best possible reliability. In this work, the main challenge is the control of the detection error probabilities. To this end, a locally non-linear parametric model of a natural image is developed. This model is built from the physics of the optical acquisition system and of the imaged scene. When the parameters of this model are known, a theoretical statistical test is proposed and its optimality properties are established. The main difficulty in constructing this test lies in the fact that image pixels are always quantised. When no information about the image is available, it is proposed to linearise the model while respecting the constraint on the false alarm probability and guaranteeing a bounded loss of optimality. Numerous experiments on real images have confirmed the relevance of this new approach.
APA, Harvard, Vancouver, ISO, and other styles
14

Thai, Thanh Hai. "Modélisation et détection statistiques pour la criminalistique numérique des images." Phd thesis, Université de Technologie de Troyes, 2014. http://tel.archives-ouvertes.fr/tel-01072541.

Full text
Abstract:
As the 21st century is the century of the transition to all-digital media, digital media now play an increasingly important role in everyday life. In the same way, sophisticated image editing software has become widely available and today makes it easy to distribute falsified images. This raises a societal problem: can we trust what we see? This thesis falls within the field of digital image forensics. Two important problems are addressed: identifying the origin of an image and detecting hidden information in an image. This work is set within the framework of statistical decision theory and proposes the construction of detectors that respect a constraint on the false alarm probability. In order to achieve a high detection performance, it is proposed to exploit the properties of natural images by modelling the main steps of the acquisition pipeline of a digital camera. The methodology, throughout this manuscript, consists in studying the optimal detector given by the likelihood ratio test in the ideal context where all the model parameters are known. When some model parameters are unknown, they are estimated in order to construct the generalised likelihood ratio test, whose statistical performance is established analytically. Numerous experiments on simulated and real images highlight the relevance of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
15

Seacrest, Tyler. "Mathematical Models of Image Processing." Scholarship @ Claremont, 2006. https://scholarship.claremont.edu/hmc_theses/188.

Full text
Abstract:
The purpose of this thesis is to develop various advanced linear algebra techniques that apply to image processing. With the increasing use of computers and digital photography, being able to manipulate digital images efficiently and with greater freedom is extremely important. By applying the tools of linear algebra, we hope to improve the ability to process such images. We are especially interested in developing techniques that allow computers to manipulate images with the least amount of human guidance. In Chapter 2 and Chapter 3, we develop the basic definitions and linear algebra concepts that lay the foundation for later chapters. Then, in Chapter 4, we demonstrate techniques that allow a computer to rotate an image to the correct orientation automatically, and similarly, for the computer to correct a certain class of color distortion automatically. In both cases, we use certain properties of the eigenvalues and eigenvectors of covariance matrices. We then model color clashing and color variation in Chapter 5 using a powerful tool from linear algebra known as the Perron-Frobenius theorem. Finally, we explore ways to determine whether an image is a blur of another image using invariant functions. The inspiration behind these functions is recent applications of Lie groups and Lie algebras to image processing.
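The automatic rotation correction mentioned above relies on the eigenvectors of a covariance matrix; as a generic illustration (not the thesis's code), the sketch below estimates the dominant axis of an intensity-weighted grey image from the covariance of its pixel coordinates, after which a standard rotation routine such as scipy.ndimage.rotate could be applied.

```python
import numpy as np

def principal_axis_angle(gray):
    """Angle (degrees) of the dominant axis of a grey image, taken from the leading
    eigenvector of the intensity-weighted covariance of pixel coordinates."""
    rows, cols = np.indices(gray.shape)
    w = gray.astype(float).ravel()
    coords = np.stack([rows.ravel(), cols.ravel()])
    mean = (coords * w).sum(axis=1) / w.sum()
    centred = coords - mean[:, None]
    cov = (centred * w) @ centred.T / w.sum()
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, np.argmax(vals)]                 # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(v[0], v[1]))
```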
APA, Harvard, Vancouver, ISO, and other styles
16

Randolph, Tami Rochele. "Image compression and classification using nonlinear filter banks." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Yao, Jianfeng. "Estimation et fluctuations de fonctionnelles de grandes matrices aléatoires." Phd thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00909521.

Full text
Abstract:
The main objective of this thesis is the study of the fluctuations of functionals of the spectrum of large random matrices, the construction of consistent estimators and the study of their performance, in the situation where the dimension of the observations is of the same order as the number of available observations. The thesis has two main parts. The first concerns the methodological contribution. We study the fluctuations of linear spectral statistics of the 'information-plus-noise' model for analytic functionals, and extend these results to the case of non-analytic functionals. The extension procedure relies on interpolation methods with Gaussian quantities. This procedure is also applied to large empirical covariance matrices. The second main part is devoted to estimating the eigenvalues of the true covariance matrix from a high-dimensional empirical covariance matrix and studying its behaviour. We propose a new consistent estimator and study its fluctuations. In wireless communications, this procedure allows a secondary network to establish the presence of available spectral resources.
APA, Harvard, Vancouver, ISO, and other styles
18

Dong, Li. "Noise level estimation from single image based on natural image statistics." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3952094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Sayrol, Clols Elisa. "Higher-order statistics applications in image sequence processing." Doctoral thesis, Universitat Politècnica de Catalunya, 1994. http://hdl.handle.net/10803/6950.

Full text
Abstract:
This thesis addresses two applications of higher-order statistics to image processing. First, the use of methods based on higher-order statistics is proposed for image restoration. Images degraded by linear-phase or zero-phase blurring filters and additive Gaussian noise are considered first. A second degradation model is examined for astronomical images, where the blurring is caused by atmospheric turbulence and telescope aberrations. The restoration strategy in both cases is based on the fact that the phase of the original signal, and of its higher-order statistics, is not distorted by the blurring function. The difficulties associated with combining two-dimensional signals and their higher-order statistics are reduced by using the Radon transform. The projection of the two-dimensional image at each angle is a one-dimensional signal that can be processed by any one-dimensional reconstruction method. In this part of the thesis, methods based on the Bicepstrum Iterative Reconstruction Algorithm and the Weight Slice Algorithm are developed. Once the original projections are reconstructed, the inverse Radon transform yields the restored image. In the second part of the thesis, a class of cost functions, again based on higher-order statistics, is proposed to estimate the motion vector between consecutive images of a sequence. When the images are degraded by additive Gaussian noise of unknown covariance, the use of higher-order statistics is particularly appropriate, since the cumulants of Gaussian processes are zero. Consistent estimates require several realisations of the same sequence, which is generally not possible. However, previous images of the sequence, for which the motion estimation problem has already been solved, can be used to obtain asymptotically unbiased estimates. This is possible when stationarity can be assumed across the images of the sequence that are used. The objective of this part of the research is the use of techniques based on higher-order statistics that can estimate motion even for relatively small regions or blocks. An alternative estimate is also defined for the case where only two images are available, which outperforms other existing techniques. Finally, a recursive version is developed for cases where a priori information is accessible.
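The appeal of cumulant-based criteria under additive Gaussian noise of unknown covariance comes from the fact that higher-order cumulants of Gaussian processes vanish; the toy snippet below (purely illustrative, unrelated to the thesis's estimators) checks this numerically for the zero-lag third-order cumulant.

```python
import numpy as np

def third_order_cumulant(x):
    """Sample zero-lag third-order cumulant E[(x - mean)^3]; it tends to zero for
    Gaussian data, which is why cumulant-based cost functions suppress additive
    Gaussian noise of unknown covariance."""
    xc = x - x.mean()
    return np.mean(xc ** 3)

rng = np.random.default_rng(0)
print(third_order_cumulant(rng.normal(size=100_000)))       # close to 0
print(third_order_cumulant(rng.exponential(size=100_000)))  # clearly non-zero (about 2)
```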
APA, Harvard, Vancouver, ISO, and other styles
20

Kongara, Naga Rama Mohana Rao. "Application of mathematical morphology to biomedical image processing." Thesis, University of Westminster, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Sedaaghi, Mohammad Hossein. "Morphological filtering in signal/image processing." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Steinberg, Eran. "Analysis of random halftone dithering using second order statistics /." Online version of thesis, 1991. http://hdl.handle.net/1850/10976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Yi. "Blur Image Processing." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448384360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Thiede, Renate Nicole. "Statistical accuracy of an extraction algorithm for linear image objects." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/73211.

Full text
Abstract:
Informal unpaved roads in developing countries arise naturally through human movement and informal housing setups. These roads are neither authorised nor maintained by the council, nor recorded in official databases or online maps. Mapping such roads from satellite images is a common problem, as information on these roads is critical for sustainable city growth. Information on their location and extent may be gleaned from spatial big data; however, no automatic or semi-automatic approach is freely available. This research develops a novel algorithm for extracting informal roads from multispectral satellite images, using physical road characteristics. These include near-infrared reflectance, addressed via the NDVI index; shape, addressed via measures of compactness and elongation; and grey-value intensity. The crux of the algorithm is the Discrete Pulse Transform, implemented via the Roadmaker's Pavage. The algorithm provides a classification of road objects, along with an associated uncertainty measure for each road object. Accuracy is assessed using per-pixel assessment metrics and metrics based on road characteristics, including completeness, correctness, and Pratt's Figure of Merit, which is applied to road extraction accuracy for the first time. The algorithm is applied to areas in Gauteng and North West Provinces, South Africa. Sources of uncertainty and error are discussed, such as indefinite boundaries, surface type heterogeneity, trees and shadows.
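Pratt's Figure of Merit rewards detected pixels that lie close to the reference road network; a common formulation is sketched below (a generic implementation using the conventional scaling alpha = 1/9, not the code from this dissertation).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, reference, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between binary maps:
    (1 / max(N_detected, N_reference)) * sum over detected pixels of 1 / (1 + alpha * d^2),
    where d is the distance from a detected pixel to the nearest reference pixel."""
    reference = reference.astype(bool)
    detected = detected.astype(bool)
    d_to_ref = distance_transform_edt(~reference)   # distance to nearest reference pixel
    score = (1.0 / (1.0 + alpha * d_to_ref[detected] ** 2)).sum()
    return score / max(detected.sum(), reference.sum())
```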
Mini Dissertation (MSc)--University of Pretoria, 2019.
Acknowledgement of the National Research Foundation for the funding provided through the NRF-SASA Crisis in Academic Statistics grant.
Statistics
MSc
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
25

Chung, François. "Modélisation de l'apparence de régions pour la segmentation d'images basée modèle." Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00575796.

Full text
Abstract:
This thesis is devoted to a new appearance model for model-based image segmentation. This model, called the Multimodal Prior Appearance Model (MPAM), is built from an EM clustering of intensity profiles combined with an automatic method for determining the number of classes. Unlike classical PCA-based approaches, intensity profiles are clustered for each mesh and not for each vertex. First, we describe the construction of the MPAM from a set of meshes and images. The clustering of intensity profiles and the determination of the number of regions by a new selection criterion are explained. A spatial regularisation to smooth the clustering is presented, and the projection of the appearance information onto a reference mesh is described. Next, we present a spectral clustering approach whose aim is to optimise the clustering of profiles for segmentation. The representation of the similarity between data points in the spectral space is explained. Comparative results on intensity profiles of the liver from CT images show that our approach outperforms PCA-based models. Finally, we present analysis methods for lower-limb structures from MR images. First, our technique for creating subject-specific models for kinematic simulations of the lower limbs is described. Then, the performance of statistical models is compared in a bone segmentation context when only a small data set is available.
APA, Harvard, Vancouver, ISO, and other styles
26

Deng, Hao. "Mathematical approaches to digital color image denoising." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.

Full text
Abstract:
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2010.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
27

Long, Zhiling. "Statistical image modeling in the contourlet domain with application to texture segmentation." Diss., Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-11082007-161335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Baddeley, Roland. "Visual statistics using neural networks." Thesis, University of Stirling, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Zhi. "Variational image segmentation, inpainting and denoising." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/292.

Full text
Abstract:
Variational methods have attracted much attention in the past decade. With rigorous mathematical analysis and computational methods, variational minimization models can handle many practical problems arising in image processing, such as image segmentation and image restoration. We propose a two-stage image segmentation approach for color images: in the first stage, the primal-dual algorithm is applied to efficiently solve the proposed minimization problem for a smoothed image solution without irrelevant and trivial information; then, in the second stage, we adopt a hill-climbing procedure to segment the smoothed image. For multiplicative noise removal, we employ a difference of convex algorithm to solve the non-convex AA model. We also improve the non-local total variation model. More precisely, we add an extra term to impose regularity on the graph formed by the weights between pixels. Thin structures can benefit from this regularization term, because it allows the weight values to be adapted from a global point of view, so thin features are not overlooked as they are in conventional non-local models. Since the non-local total variation term now has two variables, the image u and the weights v, and it is concave with respect to v, the proximal alternating linearized minimization algorithm is naturally applied with variable metrics to solve the non-convex model efficiently. The efficiency of the proposed approaches is demonstrated on problems including image segmentation, image inpainting and image denoising.
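A two-stage pipeline of the kind described above, first smooth, then cluster the smoothed image, can be mimicked with standard components; the sketch below uses TV denoising and k-means purely as stand-ins for the primal-dual solver and the hill-climbing stage of the thesis.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from sklearn.cluster import KMeans

def two_stage_segmentation(image, n_phases=3, smoothing_weight=0.1):
    """Stage 1: compute a smoothed (cartoon-like) version of the image.
    Stage 2: cluster the smoothed intensities into the requested number of phases."""
    smooth = denoise_tv_chambolle(image, weight=smoothing_weight)
    labels = KMeans(n_clusters=n_phases, n_init=10, random_state=0).fit_predict(
        smooth.reshape(-1, 1))
    return labels.reshape(image.shape), smooth
```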
APA, Harvard, Vancouver, ISO, and other styles
30

Maragos, Petros A. "A unified theory of translation-invariant systems with applications to morphological analysis and coding of images." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/14833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Qiao, Motong. "Optimization based methods for image segmentation and image tone mapping." HKBU Institutional Repository, 2014. https://repository.hkbu.edu.hk/etd_oa/25.

Full text
Abstract:
Optimization methods have been widely utilized in the field of imaging science, such as image denoising, image segmentation, image contrast adjustment, high dynamic range imaging, etc. In recent decades, it has become more and more popular to reformulate an image processing problem as an energy minimization problem and then solve for the minimizer by optimization based methods. In this thesis, we address three popular issues in image processing and computational photography by optimization based methods: image segmentation, bit-depth expansion, and high dynamic range image tone mapping. The contribution of this thesis can be presented in three parts, according to the different topics. For the image segmentation problem, we present a multi-phase image segmentation model based on the histogram of the Gabor feature space, which consists of responses from a set of Gabor filters with various orientations, scales and frequencies. Our model replaces the error function term in the original fuzzy region competition model with a squared 2-Wasserstein distance function, which is a metric measuring the distance between two histograms. The energy functional is minimized by the alternating direction method of multipliers, and the existence of closed-form solutions is guaranteed when the exponent of the fuzzy membership term is 1 or 2. The experimental results show the advantage of our proposed method compared to other recent methods. As for the bit-depth expansion problem, we develop a variational approach containing an energy functional to determine a local mapping function for bit-depth expansion via a smoothing technique, such that each pixel can be adjusted locally to a high bit-depth value. In order to enhance the contrast of the low bit-depth images, we make use of the histogram equalization technique for such a local mapping function. Both bit-depth expansion and histogram equalization terms can be combined in the resulting objective function. In order to minimize the differences among the local mapping functions at nearby pixel locations, a spatial regularization of the mapping is incorporated in the objective function. Regarding the tone mapping problem for high dynamic range images, we propose a computational tone mapping operator which makes use of a localized gamma correction. Our tone mapping operator combines the two subproblems of the tone mapping problem, i.e. luminance compression and color rendering, into one general framework. Bright regions and dark regions can be distinguished and treated differently. In our method, we propose two adjustment rules according to the perceptual preferences of the human visual system towards contrast and colors respectively. The resulting tone mapped images have a natural look and received the highest score in our observer subjective test. Based on the motivation of our computational tone mapping operator, we propose a variational method for the image tone mapping problem. The core idea is to minimize the difference in local contrast between the tone mapped image and the high dynamic range image under some constraints. The energy functional contains a local contrast fidelity term and an L-2 total variation regularization term. Local gamma correction is also applied as in our previous computational model, and the unknown variables are the non-uniform gamma values. The non-uniform gamma values for each pixel can be obtained by minimizing the fidelity term, while the smoothing term ensures that the gamma values for nearby pixels do not vary too much from each other. The results of both our computational and variational tone mapping operators show an advantage in preserving detailed image content in the bright and dark regions. Keywords: optimization, alternating direction method of multipliers, variational model, image segmentation, Mumford-Shah model, Gabor filter, contrast adjustment, histogram equalization, bit-depth expansion, dynamic range, HDR imaging, tone mapping operators, gamma correction, color rendering.
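The squared 2-Wasserstein distance between two one-dimensional histograms, used above as the error term of the segmentation model, can be computed from their quantile functions; the snippet below is a simple numerical sketch (the shared bin centres and the quantile grid are assumptions of the illustration).

```python
import numpy as np

def wasserstein2_squared_1d(hist_p, hist_q, bin_centers, n_quantiles=200):
    """Squared 2-Wasserstein distance between two 1-D histograms defined on the
    same bin centres, via their generalised inverse CDFs (quantile functions)."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    t = (np.arange(n_quantiles) + 0.5) / n_quantiles          # uniform grid in (0, 1)
    idx_p = np.minimum(np.searchsorted(Fp, t), len(bin_centers) - 1)
    idx_q = np.minimum(np.searchsorted(Fq, t), len(bin_centers) - 1)
    return np.mean((bin_centers[idx_p] - bin_centers[idx_q]) ** 2)
```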
APA, Harvard, Vancouver, ISO, and other styles
32

Corker, Thomas A. "Mathematical morphology applied to the reduction of interferograms." Thesis, Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Ting. "Mathematical morphology and its applications for still and moving image processing." Thesis, University of Liverpool, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lü, Lin, and 吕琳. "Geometric optimization for shape processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46483640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bacchuwar, Ketan. "Image processing for semantic analysis of the coronary interventions in cardiology." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1074/document.

Full text
Abstract:
Percutaneous coronary intervention (PCI) is performed using real-time radiographic imaging in an interventional suite. Modeling these PCI procedures to help the practitioner requires the interventional machine to understand the different phases of the procedure, which can then be used to optimize the X-ray dose and the contrast agent. One of the important tasks in achieving this goal is to segment the different interventional tools in the fluoroscopic image streams and to derive semantic information from them. The component tree, a powerful mathematical morphology tool, forms the basis of the proposed segmentation methods. We present this work in two parts: 1) the segmentation of the low-contrast empty catheter, and 2) the segmentation of the guide-wire tip and its tracking to detect the vessel of intervention. We present a new scale-space-based segmentation method for detecting low-contrast objects such as an empty catheter. For the last part, we present the segmentation of the guide-wire tip with component-tree-based filtering and propose an algorithm to semantically track the segmented tip in order to determine the vessel of intervention.
APA, Harvard, Vancouver, ISO, and other styles
36

Akgun, Toygar. "Resolution enhancement using natural image statistics and multiple aliased observations." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22675.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Yucel Altunbasak; Committee Member: Ghassan Alregib; Committee Member: Marcus Spruill; Committee Member: Patricio A. Vela; Committee Member: Russell M. Mersereau.
APA, Harvard, Vancouver, ISO, and other styles
37

Cui, Lei. "Topics in image recovery and image quality assessment /Cui Lei." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/368.

Full text
Abstract:
Image recovery, especially image denoising and deblurring, has been widely studied during the last decades. Variational models can preserve the edges of images well while restoring images from noise and blur. Some variational models are non-convex, and methods for non-convex optimization are still limited. This thesis applies a non-convex optimization method called the difference of convex algorithm (DCA) to solve different variational models for various kinds of noise removal problems. In an imaging system, the noise appearing in images can follow different kinds of distributions depending on the imaging environment and imaging technique. Here we show how to apply DCA to Rician noise removal and Cauchy noise removal. The performance of our experiments demonstrates that our proposed non-convex algorithms outperform existing ones, with better PSNR and less computation time. The progress made by our new method can improve the precision of diagnostic techniques by reducing Rician noise more efficiently, and can improve synthetic aperture radar imaging precision by reducing the Cauchy noise within. When applying variational models to image denoising and deblurring, a significant issue is the choice of the regularization parameters. Few methods have been proposed for regularization parameter selection so far, and the numerical algorithms of the existing methods are either complicated or implicit. In order to find a more efficient and easier way to estimate regularization parameters, we create a new image-quality sharpness metric called SQ-Index, which is based on the theory of Global Phase Coherence. The new metric can be used for estimating parameters for a variety of variational models, and can also estimate the noise intensity based on specific models. In our experiments, we show the noise estimation performance with this new metric. Moreover, extensive experiments are made on image denoising and deblurring under different kinds of noise and blur. The numerical results show the robust performance of image restoration obtained by applying our metric to parameter selection for different variational models.
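A difference-of-convex algorithm of the kind used above linearises the concave part of the objective at each iterate and solves the resulting convex subproblem exactly; the sketch below shows the generic scheme on a toy one-dimensional objective (the decomposition and the subproblem solver are illustrative assumptions, not the thesis's denoising models).

```python
import numpy as np

def dca(grad_h, solve_convex_subproblem, x0, n_iter=50):
    """Minimise f(x) = g(x) - h(x) with g, h convex: at each step h is linearised
    at the current iterate and the convex surrogate g(x) - <y, x> is minimised."""
    x = x0
    for _ in range(n_iter):
        y = grad_h(x)                       # gradient of the subtracted convex part
        x = solve_convex_subproblem(y)      # argmin_x g(x) - y * x
    return x

# Toy example: f(x) = x**4 / 4 - x**2 / 2, with g = x**4 / 4 and h = x**2 / 2.
# The subproblem argmin_x x**4 / 4 - y * x has the closed form x = cbrt(y):
x_min = dca(grad_h=lambda x: x, solve_convex_subproblem=np.cbrt, x0=0.5)  # -> about 1.0
```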
APA, Harvard, Vancouver, ISO, and other styles
38

Westermark, Pontus. "Wavelets, Scattering transforms and Convolutional neural networks : Tools for image processing." Thesis, Uppsala universitet, Analys och sannolikhetsteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-337570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Fan. "Alternating direction methods for image recovery." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Green, Donald R. "The utility of higher-order statistics in Gaussian noise suppression." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FGreen.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): Charles W. Therrien, Charles W. Granderson. Includes bibliographical references (p. 123). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Liyuan. "Variational approaches in image recovery and segmentation." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/227.

Full text
Abstract:
Image recovery and segmentation are fundamental tasks in the image processing field because of their many contributions to practical applications. Over the past ten years, variational methods have achieved great success on these two issues; in this thesis, we continue this line of work by proposing several new variational approaches for restoring and segmenting an image. This thesis contains two parts. The first part addresses recovering an image and the second part emphasizes segmentation. Along with the wide utilization of the magnetic resonance imaging (MRI) technique, we particularly deal with blurry images corrupted by Rician noise. In chapter 1, two new convex variational models for recovering an image corrupted by Rician noise with blur are presented. These two models are motivated by the non-convex maximum-a-posteriori (MAP) model proposed in prior papers. In the first method, we use an approximation of the zero-order modified Bessel function in the MAP model and add an entropy-like term to obtain a convex model. Through studying the statistical properties of Rician noise, we obtain a strictly convex model by adding an additional data-fidelity term to the MAP model in the second method. Primal-dual methods are applied to solve the models. The simulation outcomes show that our models outperform some existing effective models in both recovered image quality and computational time. Cone beam CT (CBCT) is routinely applied in image guided radiation therapy (IGRT) to help patient setup. Its imaging dose, however, is still a concern, limiting its wide application. Developing novel technologies for radiation dose reduction has been an active research topic. In chapter 2, we propose an improvement of a practical CBCT dose control scheme, the temporal non-local means (TNLM) scheme, for IGRT. We denoise the low-dose scanned image by using the previous images as prior knowledge, combining deformable image registration and TNLM. Different from the original TNLM, in the new method the search range for each pixel is not fixed, but based on the motion vector between the prior image and the obtained image. This makes it easier to find similar pixels in the previous images and also reduces the computational time, since large search windows are not needed. The phantom and patient studies show that the new method outperforms the original one in both image quality and computational time. In the second part, we present a two-stage method for segmenting an image corrupted by blur and Rician noise. The method is motivated by the two-stage segmentation method developed by the authors in 2013 and the restoration method for images with Rician noise. First, based on the statistical properties of Rician noise, we present a new convex variant of the modified Mumford-Shah model to get the smooth cartoon part u of the image. Then, we cluster the cartoon u into different parts to obtain the final contours of the different phases of the image. Moreover, u from the first stage is unique because of the convexity of the new model, and it needs to be computed only once, no matter how the thresholds and the number of phases K in the second stage change. Simulations on synthetic and real images show that our model outperforms some existing segmentation models in both precision and computational time.
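Rician data of the kind targeted by these models arise as the magnitude of a complex Gaussian perturbation of the clean signal, the usual model for MR magnitude images; the short snippet below (illustrative only) simulates such noise for experimentation.

```python
import numpy as np

def add_rician_noise(image, sigma, rng=None):
    """Return a Rician-corrupted copy of `image`: the magnitude of the clean value
    plus independent Gaussian noise on the real and imaginary channels."""
    rng = np.random.default_rng() if rng is None else rng
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real ** 2 + imag ** 2)
```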
APA, Harvard, Vancouver, ISO, and other styles
42

Sum, Kwok-wing Anthony, and 岑國榮. "Partial differential equation based methods in medical image processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38958624.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Turkmen, Muserref. "Digital Image Processing Of Remotely Sensed Oceanographic Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609948/index.pdf.

Full text
Abstract:
Developing remote sensing instrumentation allows obtaining information about an area rapidly and at low cost. This fact offers a challenge to remote sensing algorithms aimed at extracting information about an area from the available remote sensing data, a very typical and important problem being the interpretation of satellite images. A very efficient approach to remote sensing is employing discriminant functions to distinguish different landscape classes from satellite images. Various methods in this direction have already been studied; however, the efficiency of the studied methods is still not very high. In this thesis, we improve the efficiency of remote sensing algorithms. Besides, we investigate improved boundary detection methods on satellite images.
APA, Harvard, Vancouver, ISO, and other styles
44

Rau, Christian. "Curve estimation and signal discrimination in spatial problems /." View thesis entry in Australian Digital Theses Program, 2003. http://thesis.anu.edu.au/public/adt-ANU20031215.163519/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Shen, Yijiang. "Binary image restoration by positive semidefinite programming and signomial programming." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39557431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Wei-chun. "Simulation of a morphological image processor using VHDL. mathematical components /." Online version of thesis, 1993. http://hdl.handle.net/1850/11872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hays, Peter Sipe. "A vector model for analysis, decomposition and segmentation of textures." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-06082009-171152/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Moore, C. J. "Mathematical analysis and picture encoding methods applied to large stores of archived digital images." Thesis, University of Manchester, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wong, Ka-yan, and 王嘉欣. "Positioning patterns from multidimensional data and its applications in meteorology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39558630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Taboada, Fernando L. "Detection and classification of low probability of intercept radar signals using parallel filter arrays and higher order statistics." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FTaboada.pdf.

Full text
Abstract:
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Phillip E. Pace, Herschel H. Loomis Jr. Includes bibliographical references (p. 269-270). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
