Dissertations / Theses on the topic 'Image representation methods'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 30 dissertations / theses for your research on the topic 'Image representation methods.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chang, William. "Representation Theoretical Methods in Image Processing." Scholarship @ Claremont, 2004. https://scholarship.claremont.edu/hmc_theses/160.
Karmakar, Priyabrata. "Effective and efficient kernel-based image representations for classification and retrieval." Thesis, Federation University Australia, 2018. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/165515.
Doctor of Philosophy
Nygaard, Ranveig. "Shortest path methods in representation and compression of signals and image contours." Doctoral thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1182.
Signal compression is an important problem encountered in many applications. Various techniques have been proposed over the years for addressing the problem. The focus of the dissertation is on signal representation and compression by the use of optimization theory, more specifically shortest path methods.
Several new signal compression algorithms are presented. They are based on the coding of line segments which are used to approximate, and thereby represent, the signal. These segments are fitted in a way that is optimal given some constraints on the solution. By formulating the compression problem as a graph theory problem, shortest path methods can be applied in order to yield optimal compression with respect to the given constraints.
The approaches focused on in this dissertation mainly have their origin in ECG compression and are often referred to as time domain compression methods. Coding by time domain methods is based on the idea of extracting a subset of significant signal samples to represent the signal. The key to a successful algorithm is a good rule for determining the most significant samples. Between any two succeeding samples in the extracted sample set, different functions are applied in reconstruction of the signal. These functions are fitted in a way that guarantees minimal reconstruction error under the given constraints. Two main categories of compression schemes are developed:
1. Interpolating methods, in which equality between the original and reconstructed signal is insisted upon at the points of extraction.
2. Non-interpolating methods, where the interpolation restriction is relaxed.
Both first and second order polynomials are used in reconstruction of the signal. An approach is also developed where multiple error measures are applied within one compression algorithm.
The approach of extracting the most significant samples is further developed by measuring the samples in terms of the number of bits needed to encode them. In this way we develop an approach which is optimal in the rate-distortion sense.
Although the approaches developed are applicable to any type of signal, the focus of this dissertation is on the compression of electrocardiogram (ECG) signals and image contours. ECG signal compression has traditionally been
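The graph formulation described in this abstract can be sketched concretely. Below is a minimal interpolating, first-order (piecewise-linear) variant, an illustration rather than the thesis's exact algorithm: nodes are sample indices, an edge (i, j) exists when the straight segment between samples i and j stays within a tolerance, and a shortest path yields the fewest segments.

```python
import math

def segment_error(signal, i, j):
    """Max absolute error when samples i..j are replaced by the straight
    line interpolating (i, signal[i]) and (j, signal[j])."""
    err = 0.0
    for k in range(i + 1, j):
        interp = signal[i] + (signal[j] - signal[i]) * (k - i) / (j - i)
        err = max(err, abs(signal[k] - interp))
    return err

def compress(signal, tol):
    """Shortest path (fewest segments) through the DAG whose edge (i, j)
    exists iff the interpolating segment stays within tol.
    Returns the indices of the extracted significant samples."""
    n = len(signal)
    best = [math.inf] * n   # fewest segments needed to reach sample k
    prev = [None] * n
    best[0] = 0
    for j in range(1, n):
        for i in range(j):
            if best[i] + 1 < best[j] and segment_error(signal, i, j) <= tol:
                best[j], prev[j] = best[i] + 1, i
    # backtrack the extracted sample indices
    path, k = [], n - 1
    while k is not None:
        path.append(k)
        k = prev[k]
    return path[::-1]
```

Raising `tol` trades reconstruction error for fewer stored samples, which is exactly the rate-distortion trade-off the abstract refers to.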
Sampaio, de Rezende Rafael. "New methods for image classification, image retrieval and semantic correspondence." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE068/document.
The problem of image representation is at the heart of computer vision. The choice of features extracted from an image changes according to the task we want to study. Large image retrieval databases demand a compressed global vector representing each image, whereas a semantic segmentation problem requires a clustering map of its pixels. Machine learning techniques are the main tools used for the construction of these representations. In this manuscript, we address the learning of visual features for three distinct problems: image retrieval, semantic correspondence and image classification. First, we study the dependency of a Fisher vector representation on the Gaussian mixture model used as its codewords. We introduce the use of multiple Gaussian mixture models for different backgrounds, e.g. different scene categories, and analyze the performance of these representations for object classification and the impact of scene category as a latent variable. Our second approach proposes an extension to the exemplar SVM feature encoding pipeline. We first show that, by replacing the hinge loss by the square loss in the ESVM cost function, similar results in image retrieval can be obtained at a fraction of the computational cost. We call this model the square-loss exemplar machine, or SLEM. Secondly, we introduce a kernelized SLEM variant which benefits from the same computational advantages but displays improved performance. We present experiments that establish the performance and efficiency of our methods using a large array of base feature representations and standard image retrieval datasets. Finally, we propose a deep neural network for the problem of establishing semantic correspondence. We employ object proposal boxes as elements for matching and construct an architecture that simultaneously learns the appearance representation and geometric consistency. We propose new geometric consistency scores tailored to the neural network's architecture.
Our model is trained on image pairs obtained from keypoints of a benchmark dataset and evaluated on several standard datasets, outperforming both recent deep learning architectures and previous methods based on hand-crafted features. We conclude the thesis by highlighting our contributions and suggesting possible future research directions.
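The square-loss replacement the abstract describes admits a closed-form solution, which is where the computational saving comes from. A minimal sketch, assuming a single positive exemplar, a pool of negatives, and a ridge penalty (the bias handling and parameter names are illustrative, not the thesis's exact formulation):

```python
import numpy as np

def slem(exemplar, negatives, lam=1.0):
    """Square-loss exemplar machine sketch: ridge regression with one
    positive (the exemplar, label +1) against negatives (label -1).
    Returns (w, b); w serves as the new encoding of the exemplar."""
    X = np.vstack([exemplar, negatives])
    y = np.array([1.0] + [-1.0] * len(negatives))
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # absorb the bias term
    reg = lam * np.eye(d + 1)
    reg[-1, -1] = 0.0                      # do not regularize the bias
    wb = np.linalg.solve(Xb.T @ Xb + reg, Xb.T @ y)
    return wb[:-1], wb[-1]
```

Unlike the hinge-loss ESVM, which needs an iterative solver per exemplar, this normal-equation solve can be batched over many exemplars, since the negative-pool part of `Xb.T @ Xb` is shared.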
Budinich, Renato [Verfasser], Gerlind [Akademischer Betreuer] Plonka-Hoch, Gerlind [Gutachter] Plonka-Hoch, and Armin [Gutachter] Iske. "Adaptive Multiscale Methods for Sparse Image Representation and Dictionary Learning / Renato Budinich ; Gutachter: Gerlind Plonka-Hoch, Armin Iske ; Betreuer: Gerlind Plonka-Hoch." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1175625396/34.
Jia, Yue [Verfasser], Timon [Akademischer Betreuer] Rabczuk, Klaus [Gutachter] Gürlebeck, and Alessandro [Gutachter] Reali. "Methods based on B-splines for model representation, numerical analysis and image registration / Yue Jia ; Gutachter: Klaus Gürlebeck, Alessandro Reali ; Betreuer: Timon Rabczuk." Weimar : Institut für Strukturmechanik, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:wim2-20151210-24849.
Jia, Yue [Verfasser], Timon [Akademischer Betreuer] Rabczuk, Klaus [Gutachter] Gürlebeck, and Alessandro [Gutachter] Reali. "Methods based on B-splines for model representation, numerical analysis and image registration / Yue Jia ; Gutachter: Klaus Gürlebeck, Alessandro Reali ; Betreuer: Timon Rabczuk." Weimar : Institut für Strukturmechanik, 2015. http://d-nb.info/1116366770/34.
Sjöberg, Oscar. "Evaluating Image Compression Methods on Two Dimensional Height Representations." Thesis, Linköpings universitet, Informationskodning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171227.
Full textWei, Qi. "Bayesian fusion of multi-band images : A powerful tool for super-resolution." Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/14398/1/wei.pdf.
Full textSlobodan, Dražić. "Shape Based Methods for Quantification and Comparison of Object Properties from Their Digital Image Representations." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=107871&source=NDLTD&language=en.
Full textУ тези су размотрени развој, побољшање и евалуација метода за квантитативну карактеризацију објеката приказаних дигиталним сликама, као и мере растојања између дигиталних слика. Методе за квантитативну карактеризацију објеката представљених дигиталним сликама се све више користе у применама у којима грешка може имати критичне последице, а традиционалне методе за квантитативну карактеризацију су мале прецизности и тачности. У тези се показује да се коришћењем информације о покривеност пиксела обликом може значајно побољшати прецизност и тачност оцене растојања између две најудаљеније тачке облика мерено у датом правцу. Веома је пожељно да мера растојања између дигиталних слика може да се веже за одређену особину облика и морфолошке операције се користе приликом дефинисања растојања у ту сврху. Ипак, растојања дефинисана на овај начин показују се недовољно осетљива на релевантне податке дигиталних слика који представљају особине облика. У тези се показује да идеја адаптивне математичке морфологије може успешно да се користи да би се превазишао поменути проблем осетљивости растојања дефинисаних користећи морфолошке операције.
U tezi su razmotreni razvoj, poboljšanje i evaluacija metoda za kvantitativnu karakterizaciju objekata prikazanih digitalnim slikama, kao i mere rastojanja između digitalnih slika. Metode za kvantitativnu karakterizaciju objekata predstavljenih digitalnim slikama se sve više koriste u primenama u kojima greška može imati kritične posledice, a tradicionalne metode za kvantitativnu karakterizaciju su male preciznosti i tačnosti. U tezi se pokazuje da se korišćenjem informacije o pokrivenost piksela oblikom može značajno poboljšati preciznost i tačnost ocene rastojanja između dve najudaljenije tačke oblika mereno u datom pravcu. Veoma je poželjno da mera rastojanja između digitalnih slika može da se veže za određenu osobinu oblika i morfološke operacije se koriste prilikom definisanja rastojanja u tu svrhu. Ipak, rastojanja definisana na ovaj način pokazuju se nedovoljno osetljiva na relevantne podatke digitalnih slika koji predstavljaju osobine oblika. U tezi se pokazuje da ideja adaptivne matematičke morfologije može uspešno da se koristi da bi se prevazišao pomenuti problem osetljivosti rastojanja definisanih koristeći morfološke operacije.
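The precision gain from pixel coverage discussed in this abstract can be illustrated with a one-dimensional toy example (the coverage values below are hypothetical): a crisp, thresholded count quantizes an object's extent to whole pixels, while summing fractional coverage values recovers sub-pixel extent.

```python
def width_binary(coverage, thresh=0.5):
    """Crisp estimate: count pixels whose coverage exceeds a threshold."""
    return sum(1 for c in coverage if c >= thresh)

def width_coverage(coverage):
    """Coverage-based estimate: partial pixels contribute fractionally."""
    return sum(coverage)
```

For a segment of true length 3.4 covering pixels with fractions [0.7, 1.0, 1.0, 0.7], the crisp count gives 4, while the coverage sum returns 3.4 exactly; averaged over many placements, the coverage estimator has both lower bias and lower variance.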
Nain, Delphine. "Scale-based decomposable shape representations for medical image segmentation and shape analysis." Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-11192006-184858/.
Aaron Bobick, Committee Chair ; Allen Tannenbaum, Committee Co-Chair ; Greg Turk, Committee Member ; Steven Haker, Committee Member ; W. Eric. L. Grimson, Committee Member.
Nyh, Johan. "From Snow White to Frozen : An evaluation of popular gender representation indicators applied to Disney’s princess films." Thesis, Karlstads universitet, Institutionen för geografi, medier och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-36877.
Grade: VG (scale IG-VG)
Drira, Achraf. "Geoacoustic inversion : improvement and extension of the sources image method." Thesis, Brest, 2015. http://www.theses.fr/2015BRES0089/document.
This thesis aims at analyzing the signals emitted from a spherical omnidirectional source, reflected by a stratified sedimentary environment and recorded by a hydrophone array, in order to characterize marine sediments quantitatively at medium frequencies, i.e. between 1 and 10 kHz. The research developed in this manuscript provides a methodology to facilitate the estimation of medium geoacoustic parameters with the image source method, and some appropriate technical solutions to improve this recently developed inversion method. The image source method is based on a physical modeling of the reflection, by a stratified medium under the Born approximation, of the wave emitted from a source. As a result, the reflection of the wave on the layered medium can be represented by a set of image sources, symmetrical to the real source with respect to the interfaces, whose spatial positions are related to the sound speeds and the thicknesses of the layers. The study consists of two parts: signal processing and inversion of geoacoustic parameters. The first part of the work is focused on the development of the image source method. The original method was based on migration and semblance maps of the recorded signals to determine the input parameters of the inversion algorithm, which are travel times and arrival angles. To avoid this step, we propose to determine the travel times with the Teager-Kaiser energy operator (TKEO), while the arrival angles are estimated with a triangulation approach. The inversion model is then integrated, taking into account the possible deformation of the antenna. This part concludes with a new approach that combines TKEO and time-frequency representations in order to obtain a good estimation of the travel times in the case of noisy signals. For the modeling and geoacoustic inversion part, we first propose an accurate description of the forward model by introducing the concept of virtual image sources.
This idea provides a deeper understanding of the developed approach. Then, we propose an extension of the image source method to the estimation of supplementary geoacoustic parameters: the density, the absorption coefficient, and the shear wave sound speed. This extension is based on the results of the original inversion (estimation of the number of layers, their thicknesses, and the pressure sound speeds) and on the use of the amplitudes of the reflected signals. These improvements and extensions of the image source method are illustrated by their application to both synthetic and real signals, the latter coming from tank and at-sea measurements. The obtained results are very satisfactory, from a computational point of view as well as for the quality of the provided estimations.
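The Teager-Kaiser energy operator used above for travel-time picking has a very compact discrete form, psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal sketch follows; the threshold-based pick is an illustrative simplification of the detection step, not the thesis's full procedure.

```python
def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1]*x[n+1], for 1 <= n <= len(x) - 2."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_arrival(x, threshold):
    """Index (in the original signal) of the first sample whose
    Teager-Kaiser energy exceeds the threshold -- a crude travel-time
    pick for an impulsive arrival."""
    for n, e in enumerate(teager_kaiser(x), start=1):
        if e > threshold:
            return n
    return None
```

Because the operator reacts to both amplitude and instantaneous frequency changes, it sharpens impulsive arrivals relative to slowly varying background, which is why it is attractive for picking echo onsets.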
Rey, Otero Ives. "Anatomy of the SIFT method." Thesis, Cachan, Ecole normale supérieure, 2015. http://www.theses.fr/2015DENS0044/document.
This dissertation contributes an in-depth analysis of the SIFT method. SIFT is the most popular and the first efficient image comparison model. SIFT is also the first method to propose a practical scale-space sampling and to put into practice the theoretical scale invariance in scale space. It associates with each image a list of scale-invariant (also rotation- and translation-invariant) features which can be used for comparison with other images. Because SIFT-like feature detectors have since been used in countless image processing applications, and because of an intimidating number of variants, studying an algorithm that was published more than a decade ago may be surprising. It seems, however, that not much has been done to really understand this central algorithm and to find out exactly what improvements we can hope for in the matter of reliable image matching methods. Our analysis of the SIFT algorithm is organized as follows. We focus first on the exact computation of the Gaussian scale-space, which is at the heart of SIFT as well as most of its competitors. We provide a meticulous dissection of the complex chain of transformations that forms the SIFT method and a presentation of every design parameter, from the extraction of invariant keypoints to the computation of feature vectors. Using this documented implementation, which permits varying all of its parameters, we define a rigorous simulation framework to find out whether the scale-space features are indeed correctly detected by SIFT, and which sampling parameters influence the stability of extracted keypoints. This analysis is extended to the influence of other crucial perturbations, such as errors in the amount of blur, aliasing and noise. This analysis demonstrates that, despite the fact that numerous methods claim to outperform the SIFT method, there is in fact limited room for improvement in methods that extract keypoints from a scale-space.
The comparison of the many detectors proposed by SIFT's competitors is the subject of the last part of this thesis. The performance analysis of local feature detectors has been mainly based on the repeatability criterion. We show that this popular criterion is biased toward methods producing redundant (overlapping) descriptors. We therefore propose an amended evaluation metric and use it to revisit a classic benchmark. Under the amended repeatability criterion, SIFT is shown to outperform most of its more recent competitors. This last fact corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
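The Gaussian scale-space and difference-of-Gaussians (DoG) extrema at the heart of SIFT can be sketched in one dimension. This is a toy illustration of the keypoint test only, not the full SIFT chain; kernel truncation and boundary handling are deliberately simplified.

```python
import math

def gaussian_kernel(sigma):
    """Truncated, normalized 1-D Gaussian kernel."""
    r = max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(x, sigma):
    """1-D Gaussian convolution with edge clamping."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    n = len(x)
    return [sum(k[j + r] * x[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def dog_extrema(x, sigmas):
    """DoG extrema of a 1-D signal: a point is kept as a keypoint iff it
    strictly beats all 8 neighbours in position and scale."""
    G = [blur(x, s) for s in sigmas]
    D = [[b - a for a, b in zip(G[i], G[i + 1])] for i in range(len(G) - 1)]
    keys = []
    for s in range(1, len(D) - 1):
        for i in range(1, len(x) - 1):
            nb = [D[s2][i2] for s2 in (s - 1, s, s + 1)
                  for i2 in (i - 1, i, i + 1) if (s2, i2) != (s, i)]
            if D[s][i] > max(nb) or D[s][i] < min(nb):
                keys.append((s, i))
    return keys
```

The thesis's point about sampling can be probed directly with such a sketch: changing the `sigmas` progression (scales per octave, initial blur) changes which extrema survive, which is exactly the stability question studied for the 2-D case.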
Lang, Heidi. "Understanding the hidden experience of head and neck cancer patients : a qualitative exploration of beliefs and mental images." Thesis, University of Dundee, 2010. https://discovery.dundee.ac.uk/en/studentTheses/c17cd584-34b2-46bc-8290-b8cd3e6ad2c4.
Full textBoháč, Martin. "Zpracování obrazu při určování topografických parametrů povrchů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228823.
Full textNasser, Khalafallah Mahmoud Lamees. "A dictionary-based denoising method toward a robust segmentation of noisy and densely packed nuclei in 3D biological microscopy images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS283.pdf.
Cells are the basic building blocks of all living organisms. All living organisms share life processes such as growth and development, movement, nutrition, excretion, reproduction, respiration and response to the environment. In cell biology research, understanding cell structure and function is essential for developing and testing new drugs. In addition, cell biology research provides a powerful tool to study embryo development. Furthermore, it helps the scientific research community to understand the effects of mutations and various diseases. Time-Lapse Fluorescence Microscopy (TLFM) is one of the most appreciated imaging techniques, which can be used in live-cell imaging experiments to quantify various characteristics of cellular processes, e.g., cell survival, proliferation, migration, and differentiation. In TLFM imaging, not only spatial information is acquired, but also temporal information, obtained by repeated imaging of a labeled sample at specific time points, as well as spectral information, producing up to five-dimensional (X, Y, Z + Time + Channel) images. Typically, the generated datasets consist of several (hundreds or thousands of) images, each containing hundreds to thousands of objects to be analyzed. To perform high-throughput quantification of cellular processes, nuclei segmentation and tracking should be performed in an automated manner. Nevertheless, nuclei segmentation and tracking are challenging tasks due to embedded noise, intensity inhomogeneity, shape variation, and weak nuclei boundaries. Although several nuclei segmentation approaches have been reported in the literature, dealing with embedded noise remains the most challenging part of any segmentation algorithm. We propose a novel 3D denoising algorithm, based on unsupervised dictionary learning and sparse representation, that can enhance very faint and noisy nuclei while simultaneously detecting nuclei positions accurately.
Furthermore, our method is based on a limited number of parameters, with only one being critical: the approximate size of the objects of interest. The framework of the proposed method comprises image denoising, nuclei detection, and segmentation. In the denoising step, an initial dictionary is constructed by selecting random patches from the raw image; an iterative technique is then implemented to update the dictionary and obtain the final, less noisy one. Next, a detection map, based on the dictionary coefficients used to denoise the image, is used to detect marker points. Afterward, a thresholding-based approach is proposed to get the segmentation mask. Finally, a marker-controlled watershed approach is used to obtain the final nuclei segmentation result. We generate 3D synthetic images to study the effect of the few parameters of our method on cell nuclei detection and segmentation, and to understand the overall mechanism for selecting and tuning the significant parameters for the several datasets. These synthetic images have low contrast and a low signal-to-noise ratio. Furthermore, they include touching spheres; these conditions simulate characteristics that exist in the real datasets. The proposed framework shows that integrating our denoising method with a classical segmentation method works properly in the context of the most challenging cases. To evaluate the performance of the proposed method, two datasets from the Cell Tracking Challenge are extensively tested. Across all datasets, the proposed method achieved very promising results, with 96.96% recall for the C. elegans dataset. Besides, on the Drosophila dataset, our method achieved very high recall (99.3%).
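The patch-dictionary denoising idea above can be sketched in 2D with a deliberately crude stand-in: k-means centroids of the noisy patches act as atoms and each patch is coded with sparsity one (its nearest atom), with overlapping reconstructions averaged. This is not the thesis's unsupervised dictionary learning; the patch size and atom count below are illustrative.

```python
import numpy as np

def extract_patches(img, size):
    """All overlapping size-by-size patches, flattened to rows."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

def kmeans_dictionary(patches, n_atoms, n_iter=10, seed=0):
    """Tiny stand-in for dictionary learning: k-means centroids of the
    noisy patches serve as the atoms."""
    rng = np.random.default_rng(seed)
    D = patches[rng.choice(len(patches), n_atoms, replace=False)].astype(float)
    for _ in range(n_iter):
        idx = np.argmin(((patches[:, None, :] - D[None]) ** 2).sum(-1), axis=1)
        for a in range(n_atoms):
            if np.any(idx == a):
                D[a] = patches[idx == a].mean(axis=0)
    return D

def denoise(img, size=4, n_atoms=8):
    """Replace every patch by its nearest atom; average the overlaps."""
    patches = extract_patches(img, size)
    D = kmeans_dictionary(patches, min(n_atoms, len(patches)))
    idx = np.argmin(((patches[:, None, :] - D[None]) ** 2).sum(-1), axis=1)
    out = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    h, w = img.shape
    k = 0
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            out[i:i + size, j:j + size] += D[idx[k]].reshape(size, size)
            cnt[i:i + size, j:j + size] += 1
            k += 1
    return out / cnt
```

Averaging within clusters and across overlapping patches is what suppresses the noise; richer sparse codes (more atoms per patch, learned updates) refine the same mechanism.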
Liu, Yuan. "Représentation parcimonieuse basée sur la norme ℓ₀ Mixed integer programming for sparse coding : application to image denoising Incoherent dictionary learning via mixed-integer programming and hybrid augmented Lagrangian." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR22.
In this monograph, we study the exact ℓ₀-based sparse representation problem. For the classical dictionary learning problem, the solution is obtained by iterating two steps: sparse coding and dictionary updating. However, the sparse coding subproblem alone is non-convex and NP-hard. Our approach is to reformulate the problem as a mixed integer quadratic program (MIQP). By introducing two optimization techniques, initialization by a proximal method and relaxation with augmented constraints, the algorithm is greatly sped up (and is thus called AcMIQP) and applied to image denoising, where it shows good performance. Moreover, the classical problem is extended to learning an incoherent dictionary. For this problem, AcMIQP or a proximal method is used for sparse coding. As for dictionary updating, the augmented Lagrangian method (ADMM) and an extended proximal alternating linearized minimization method are combined. This exact ℓ₀-based incoherent dictionary learning is applied to image recovery, illustrating improved performance with lower coherence.
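The exact ℓ₀-constrained sparse coding problem, min ||x − Da||₂ subject to ||a||₀ ≤ k, can be solved by brute force over supports when the dictionary is tiny; this is an illustrative analogue of the exactness the MIQP reformulation buys, not the AcMIQP algorithm itself, which handles realistic sizes.

```python
import itertools
import numpy as np

def exact_l0_code(D, x, k):
    """Exact k-sparse coding: minimize ||x - D a||_2 over all supports
    of size at most k, solving a least-squares problem per support."""
    d, n = D.shape
    best, best_a = np.inf, np.zeros(n)
    for m in range(1, k + 1):
        for S in itertools.combinations(range(n), m):
            DS = D[:, S]
            a_S, *_ = np.linalg.lstsq(DS, x, rcond=None)
            r = np.linalg.norm(x - DS @ a_S)
            if r < best:
                best = r
                best_a = np.zeros(n)
                best_a[list(S)] = a_S
    return best_a, best
```

The support enumeration is combinatorial (n choose k subproblems), which is precisely why greedy surrogates are usually accepted and why an exact-but-tractable MIQP formulation is of interest.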
YANG, MING-CHUN, and 楊明錞. "2D B-string representation and access methods of image database." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/81798406311591577024.
Budinich, Renato. "Adaptive Multiscale Methods for Sparse Image Representation and Dictionary Learning." Doctoral thesis, 2018. http://hdl.handle.net/11858/00-1735-0000-002E-E55B-F.
Stein, Gideon P., and Amnon Shashua. "Direct Methods for Estimation of Structure and Motion from Three Views." 1996. http://hdl.handle.net/1721.1/5937.
Full textLi, Kuan-Ying, and 李冠穎. "Representative Images Selection Methods for Video Clips." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/97957554842018606515.
National Taipei University of Technology
Master's Program, Department of Computer Science and Information Engineering
92
Choosing representative pictures from a video by hand costs a lot of time and energy. In this thesis, we design two methods for choosing representative pictures from video clips automatically. The methods fall into two categories in our system: the first is the Well-Selection Representative Images Algorithm, and the second is the Auto-Selection Representative Images Algorithm. In the Well-Selection method, the user inputs the desired number of representative pictures; the system then analyzes the video and divides it into scenes with a Scene Changing Detection mechanism. After that, the system performs a spatial and temporal analysis of the scenes, uses Key Scene Allocation to find the key scenes according to the user's input number, and uses the Key Frame Extraction Algorithm to extract representative pictures from the key scenes. In the Auto-Selection method, the user does not need to input the number of selected pictures; the system extracts representative pictures of the video automatically using scene changing detection and the Key Frame Extraction Algorithm. Users can view the pictures and figure out the whole story of the video. The system has been tested on many types of video, and the results are quite satisfactory. Users can distill a video into representative pictures and share them with relatives and friends.
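The scene-changing detection step that both methods above rely on can be sketched with a histogram-difference cut detector. In this sketch, frames are flattened lists of intensities in [0, 1); the threshold and the middle-frame choice of representative are illustrative simplifications, not the thesis's Key Frame Extraction Algorithm.

```python
def histogram(frame, bins=8):
    """Intensity histogram of a flattened frame with values in [0, 1)."""
    h = [0] * bins
    for v in frame:
        h[min(int(v * bins), bins - 1)] += 1
    return h

def scene_changes(frames, threshold):
    """Declare a cut wherever the L1 distance between successive frame
    histograms exceeds the threshold. Returns scene start indices."""
    cuts = [0]
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        h = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, h)) > threshold:
            cuts.append(i)
        prev = h
    return cuts

def key_frames(frames, threshold):
    """One representative frame per detected scene: its middle frame."""
    cuts = scene_changes(frames, threshold) + [len(frames)]
    return [(cuts[i] + cuts[i + 1] - 1) // 2 for i in range(len(cuts) - 1)]
```

Histograms are robust to motion within a scene but change sharply at cuts, which is why histogram distance is a standard baseline for scene-change detection.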
Babu, T. Ravindra. "Large Data Clustering And Classification Schemes For Data Mining." Thesis, 2006. http://hdl.handle.net/2005/440.
"Sparse Methods in Image Understanding and Computer Vision." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.17719.
Dissertation/Thesis
Ph.D. Electrical Engineering 2013
Berkels, Benjamin [Verfasser]. "Joint methods in imaging based on diffuse image representations / vorgelegt von Benjamin Berkels." 2010. http://d-nb.info/1008748250/34.
Fuchs, Martin [Verfasser]. "Advanced methods for relightable scene representations in image space / vorgelegt von Martin Fuchs." 2008. http://d-nb.info/996233679/34.
Full text(9187466), Bharath Kumar Comandur Jagannathan Raghunathan. "Semantic Labeling of Large Geographic Areas Using Multi-Date and Multi-View Satellite Images and Noisy OpenStreetMap Labels." Thesis, 2020.
(6630578), Yellamraju Tarun. "n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis." Thesis, 2019.
(11184732), Kumar Apurv. "E-scooter Rider Detection System in Driving Environments." Thesis, 2021.
Zhu, Jihai. "Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." 2007. http://hdl.handle.net/10179/704.