Dissertations / Theses on the topic 'Matching points'

Consult the top 50 dissertations / theses for your research on the topic 'Matching points.'


1

Avdiu, Blerta. "Matching Feature Points in 3D World." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Data- och elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23049.

Full text
Abstract:
This thesis addresses one of the most active topics in computer vision, scene understanding, through the matching of 3D feature point images. The objective is to make use of Saab's latest breakthrough in the extraction of 3D feature points to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description and matching. The work continues with a brief description of the simultaneous localization and mapping (SLAM) technique, ending with a case study evaluating a newly developed software solution for SLAM called slam6d. Slam6d is a tool that registers point clouds into a common coordinate system, performing automatic, highly accurate registration of laser scans. In the case study, the use of slam6d is extended to registering 3D feature point images extracted from a stereo camera, and the registration results are analyzed. The case study starts with the registration of a single 3D feature point image captured from a stationary image sensor and continues with the registration of multiple images following a trail. The conclusion from the case study results is that slam6d can register feature point images not extracted from laser scans with high accuracy in the single-image case, but introduces some overlapping results in the case of multiple images following a trail.
APA, Harvard, Vancouver, ISO, and other styles
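The core operation a registration tool such as slam6d performs, bringing point clouds into a common coordinate system, rests on rigid alignment. As an illustrative sketch only (this is the generic closed-form Kabsch/SVD solution for points whose correspondences are already known, not slam6d's actual ICP pipeline; numpy is assumed):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping each row of P onto the
    corresponding row of Q, minimizing the summed squared distances."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known rigid motion from matched 3D feature points.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # prints: True True
```

In a full registration pipeline this closed-form step is alternated with nearest-neighbour correspondence search (ICP).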
2

Klein, Oliver [Verfasser]. "Shape Matching With Reference Points / Oliver Klein." Berlin : Freie Universität Berlin, 2008. http://d-nb.info/1023050862/34.

Full text
3

Stanton, Kevin Blythe. "Matching Points to Lines: Sonar-based Localization for the PSUBOT." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4630.

Full text
Abstract:
The PSUBOT (pronounced pea-es-you-bought) is an autonomous wheelchair robot for persons with certain disabilities. Its use of voice recognition and autonomous navigation enables it to carry out high-level commands with little or no user assistance. We first describe the goals, constraints, and capabilities of the overall system, including path planning and obstacle avoidance. We then focus on localization: the ability of the robot to locate itself in space. Odometry, a compass, and an algorithm which matches points to lines are each employed to accomplish this task. The matching algorithm (which matches "points" to "lines") is the main contribution of this work. The "points" are acquired from a rotating sonar device, and the "lines" are extracted from a user-entered line-segment model of the building. The algorithm assumes that only small corrections are necessary to correct for odometry errors, which inherently accumulate, and makes a correction by shifting and rotating the sonar image so that the data points are as close as possible to the lines. A modification of the basic algorithm to accommodate parallel lines was developed, as well as an improvement to the basic noise-removal algorithm. We found that the matching algorithm was able to determine the location of the robot to within one foot even when required to correct for as many as five feet of simulated odometry error. Finally, the algorithm's complexity was found to be well within the processing power of currently available hardware.
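The shift-and-rotate correction described in this abstract can be sketched as one linearized least-squares step: each sonar point is pulled onto its matched model line under a small-angle assumption. This is a generic point-to-line alignment step, not the PSUBOT code; the function name and numpy are assumptions:

```python
import numpy as np

def small_correction(points, normals, offsets):
    """One linearized least-squares step that shifts and rotates 2D sensor
    points onto their matched model lines. Line i is {x : n_i . x = d_i}
    with unit normal n_i; unknowns are a small translation (dx, dy) and a
    small rotation dtheta."""
    J = np.array([[0.0, -1.0], [1.0, 0.0]])     # derivative of rotation at 0
    A = np.column_stack([normals,
                         np.einsum('ij,ij->i', normals, points @ J.T)])
    b = offsets - np.einsum('ij,ij->i', normals, points)
    (dx, dy, dtheta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy, dtheta

# Example: points from two walls x = 0 and y = 0, perturbed by a small pose
# error; the step recovers approximately the opposite correction.
pts_true = np.array([[0.0, 1.0], [0.0, 2.0], [1.0, 0.0], [2.0, 0.0]])
normals = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
offsets = np.zeros(4)
th = 0.01
Rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
pts = pts_true @ Rot.T + np.array([0.05, -0.03])
dx, dy, dtheta = small_correction(pts, normals, offsets)  # ~(-0.05, 0.03, -0.01)
```

Iterating this step (re-matching points to their nearest lines between steps) is the standard way such small-correction matchers converge.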
4

Li, Chih-Lin. "Propensity Score Matching in Observational Studies with Multiple Time Points." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313420291.

Full text
5

Mellado, Nicolas. "Analysis of 3D objects at multiple scales : application to shape matching." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14685/document.

Full text
Abstract:
Depuis quelques années, l’évolution des techniques d’acquisition a entraîné une généralisation de l’utilisation d’objets 3D très denses, représentés par des nuages de points de plusieurs millions de sommets. Au vu de la complexité de ces données, il est souvent nécessaire de les analyser pour en extraire les structures les plus pertinentes, potentiellement définies à plusieurs échelles. Parmi les nombreuses méthodes traditionnellement utilisées pour analyser des signaux numériques, l’analyse dite scale-space est aujourd’hui un standard pour l’étude des courbes et des images. Cependant, son adaptation aux données 3D pose des problèmes d’instabilité et nécessite une information de connectivité, qui n’est pas directement définie dans le cas des nuages de points. Dans cette thèse, nous présentons une suite d’outils mathématiques pour l’analyse des objets 3D, sous le nom de Growing Least Squares (GLS). Nous proposons de représenter la géométrie décrite par un nuage de points via une primitive du second ordre ajustée par une minimisation aux moindres carrés, et ce pour plusieurs échelles. Cette description est ensuite dérivée analytiquement pour extraire de manière continue les structures les plus pertinentes à la fois en espace et en échelle. Nous montrons par plusieurs exemples et comparaisons que cette représentation et les outils associés définissent une solution efficace pour l’analyse des nuages de points à plusieurs échelles. Un défi intéressant est l’analyse d’objets 3D acquis dans le cadre de l’étude du patrimoine culturel. Dans cette thèse, nous étudions les données générées par l’acquisition des fragments des statues entourant par le passé le Phare d’Alexandrie, Septième Merveille du Monde. Plus précisément, nous nous intéressons au réassemblage d’objets fracturés en peu de fragments (une dizaine), mais avec de nombreuses parties manquantes ou fortement dégradées par l’action du temps. Nous proposons un formalisme pour la conception de systèmes d’assemblage virtuel semi-automatiques, permettant de combiner à la fois les connaissances des archéologues et la précision des algorithmes d’assemblage. Nous présentons deux systèmes basés sur cette conception, et nous montrons leur efficacité dans des cas concrets.
Over the last decades, the evolution of acquisition techniques yields the generalization of detailed 3D objects, represented as huge point sets composed of millions of vertices. The complexity of the involved data often requires to analyze them for the extraction and characterization of pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, the scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets. In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales. To this end, we demonstrate its relevance in various application scenarios. A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that were surrounding the legendary Alexandria Lighthouse. In particular, we focus on the problem of fractured object reassembly, consisting of few fragments (up to about ten), but with missing parts due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologist’s knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.
6

RAVEENDIRAN, JAYANTHAN. "FAST ESTIMATION OF DENSE DISPARITY MAP USING PIVOT POINTS." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1208.

Full text
Abstract:
In this thesis, a novel and fast method to compute the dense disparity map of a stereo pair of images is presented. Most current stereo matching algorithms are ill-suited to real-time matching owing to their time complexity, while methods that concentrate on real-time performance sacrifice much in accuracy. The presented method, Fast Estimation of Dense Disparity Map Using Pivot Points (FEDDUP), uses a hierarchical approach to reduce the search space for correspondences. The hierarchy starts with a set of points and then moves on to a mesh with which the edge pixels are matched. This yields a semi-global disparity map, which is then used as a soft constraint to find the correspondences of the remaining points. This process delivers good real-time performance with promising accuracy.
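For contrast with the hierarchical search, the exhaustive baseline that such a scheme prunes can be sketched as plain winner-take-all SAD block matching (an illustrative sketch only, not FEDDUP itself; numpy and scipy are assumed):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp, radius=2):
    """Baseline dense disparity by winner-take-all SAD block matching: for
    every pixel, try each disparity d, average |left - shifted right| over a
    (2*radius+1)^2 window, and keep the cheapest d. A hierarchical scheme
    prunes exactly this exhaustive per-pixel search over all disparities."""
    H, W = left.shape
    cost = np.empty((max_disp + 1, H, W))
    for d in range(max_disp + 1):
        ad = np.abs(left.astype(float) - np.roll(right, d, axis=1))
        ad[:, :d] = 1e6                      # no valid match left of column d
        cost[d] = uniform_filter(ad, size=2 * radius + 1)
    return cost.argmin(axis=0)

# Example: the right view is the left view shifted by 3 pixels, so the
# recovered disparity is 3 away from the image border.
rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right, max_disp=8)
```

The cost volume here is O(W·H·max_disp); reducing that factor is what motivates pivot-point or hierarchical approaches.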
7

Wang, Jue. "Modeling and Matching of Landmarks for Automation of Mars Rover Localization." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1213192082.

Full text
8

Palomares, Jean-Louis. "Une nouvelle méthode d’appariement de points d’intérêt pour la mise en correspondance d’images." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20075/document.

Full text
Abstract:
Ce mémoire de thèse traite de la mise en correspondance d'images pour des applications de vision stéréoscopique ou de stabilisation d'images de caméras vidéo. Les méthodes de mise en correspondance reposent généralement sur l'utilisation de points d'intérêt dans les images, c'est-à-dire de points qui présentent de fortes discontinuités d'intensité lumineuse. Nous présentons tout d'abord un nouveau descripteur de points d'intérêt, obtenu au moyen d'un filtre anisotropique rotatif qui délivre en chaque point d'intérêt une signature monodimensionnelle basée sur un gradient d'intensité. Invariant à la rotation par construction, ce descripteur possède de très bonnes propriétés de robustesse et de discrimination. Nous proposons ensuite une nouvelle méthode d'appariement invariante aux transformations euclidiennes et affines. Cette méthode exploite la corrélation des signatures sous l'hypothèse de faibles déformations, et définit une mesure de distance nécessaire à l'appariement de points. Les résultats obtenus sur des images difficiles laissent envisager des prolongements prometteurs de cette méthode.
This thesis addresses the issue of image matching for stereoscopic vision applications and image stabilization of video cameras. Matching methods are generally based on the use of interest points in the images, i.e. points which have strong discontinuities in light intensity. We first present a new descriptor of interest points, obtained by means of an anisotropic rotary filter which delivers at each interest point a one-dimensional signature based on an intensity gradient. Invariant to rotation by construction, this descriptor has very good properties of robustness and discrimination. We then propose a new matching method invariant to Euclidean and affine transformations. This method exploits the correlation of the signatures subject to moderate warping, and defines a distance measure necessary for the matching of points. The results obtained on difficult images suggest promising extensions of this method.
9

Stefanik, Kevin Vincent. "Sequential Motion Estimation and Refinement for Applications of Real-time Reconstruction from Stereo Vision." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/76802.

Full text
Abstract:
This paper presents a new approach to the feature-matching problem for 3D reconstruction by taking advantage of GPS and IMU data, along with a previously calibrated stereo camera system. It is expected that pose estimates and calibration can be used to increase feature-matching speed and accuracy. Given pose estimates of the cameras and features extracted from the images, the algorithm first enumerates feature matches based on 2D stereo projection constraints and then backprojects them to 3D. A grid search over potential camera poses is then proposed to match the 3D features and find the largest group of 3D feature matches between pairs of stereo frames; this provides pose accuracy to within the space that each grid region covers. Relative camera poses are further refined with an iteratively re-weighted least squares (IRLS) method in order to reject outliers among the 3D matches. The algorithm is shown to run correctly in real time, with the majority of processing time taken by feature extraction and description, and to outperform standard open-source software for reconstruction from imagery.
Master of Science
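The IRLS idea referenced in the abstract, re-fit the pose, down-weight large residuals, repeat, can be sketched for a rigid 3D fit. This is a generic sketch using Tukey biweights with an adaptive cutoff, not the thesis implementation (numpy assumed):

```python
import numpy as np

def weighted_rigid(P, Q, w):
    """Weighted closed-form best-fit rotation/translation (Kabsch)."""
    cP = (w[:, None] * P).sum(0) / w.sum()
    cQ = (w[:, None] * Q).sum(0) / w.sum()
    H = (P - cP).T @ ((Q - cQ) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def irls_rigid(P, Q, iters=10):
    """Iteratively re-weighted least squares: re-fit the pose, then give
    matches with large residuals a Tukey biweight near zero, so outlier
    3D matches stop influencing the estimate."""
    w = np.ones(len(P))
    for _ in range(iters):
        R, t = weighted_rigid(P, Q, w)
        r = np.linalg.norm(P @ R.T + t - Q, axis=1)
        c = max(3.0 * np.median(r), 1e-6)       # adaptive rejection cutoff
        w = np.where(r < c, (1.0 - (r / c) ** 2) ** 2, 0.0)
    return R, t, w

# Example: 40 exact matches plus 8 gross outliers; the true pose is
# recovered and the outliers end up with (near-)zero weight.
rng = np.random.default_rng(1)
P = rng.normal(size=(48, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
Q[40:] = rng.normal(size=(8, 3))                # corrupt the last 8 matches
R, t, w = irls_rigid(P, Q)
```

A redescending weight function (Tukey) is chosen here because, unlike Huber weights, it drives the influence of gross outliers all the way to zero.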
10

Yang, Liming. "Recalage robuste à base de motifs de points pseudo aléatoires pour la réalité augmentée." Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0025.

Full text
Abstract:
La Réalité Augmentée (RA) vise à afficher des informations numériques virtuelles sur des images réelles. Le recalage est important, puisqu’il permet d'aligner correctement les objets virtuels dans le monde réel. Contrairement au tracking, qui recale en utilisant les informations de l’image précédente, la localisation à grande échelle (wide baseline localization) calcule la solution en utilisant uniquement les informations présentes dans l’image courante. Elle permet ainsi de trouver des solutions initiales au problème de recalage (initialisation) et n’est pas sujette aux problèmes de « perte de tracking ». Le problème du recalage en RA est relativement bien étudié dans la littérature, mais les méthodes existantes fonctionnent principalement lorsque la scène augmentée présente des textures. Pourtant, pour le recalage avec les objets peu ou pas texturés, il est possible d’utiliser leurs informations géométriques, qui représentent des caractéristiques plus stables que les textures. Cette thèse s’attache au problème du recalage basé sur des informations géométriques, et plus précisément sur les points. Nous proposons deux nouvelles méthodes de recalage de points (RRDM et LGC), robustes et rapides. LGC est une amélioration de la méthode RRDM et peut mettre en correspondance des ensembles de motifs de points 2D ou 3D subissant une transformation dont le type est connu. LGC présente un comportement linéaire en fonction du nombre de points, ce qui permet un tracking en temps réel. La pertinence de LGC a été illustrée en développant une application de calibration de système projecteur-caméra dont les résultats sont comparables à l’état de l’art tout en présentant des avantages pour l’utilisateur en termes de taille de mire de calibration.
Registration is a very important task in Augmented Reality (AR). It provides the spatial alignment between the real environment and virtual objects. Unlike tracking (which relies on previous-frame information), wide baseline localization finds the correct solution from a wide search space, so as to overcome initialization or tracking-failure problems. Nowadays, various wide baseline localization methods have been applied successfully, but for objects with little or no texture there is still no promising method. One possible solution is to rely on geometric information, which sometimes does not vary as much as texture or color. This dissertation focuses on new wide baseline localization methods based entirely on geometric information, and more specifically on points. I propose two novel point pattern matching algorithms, RRDM and LGC. In particular, LGC registers 2D or 3D point patterns under any known transformation type and supports multi-pattern recognition. It has a linear behavior with respect to the number of points, which allows for real-time tracking. It is applied to multi-target tracking and augmentation, as well as to 3D model registration. A practical method for projector-camera system calibration based on LGC is also proposed; it can be useful for large-scale Spatial Augmented Reality (SAR). In addition, I developed a method to estimate the rotation axis of a surface of revolution quickly and precisely on 3D data; it is integrated in a novel framework to reconstruct surfaces of revolution in dense SLAM in real time.
11

El, Sayed Abdul Rahman. "Traitement des objets 3D et images par les méthodes numériques sur graphes." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH19/document.

Full text
Abstract:
La détection de peau consiste à détecter les pixels correspondant à une peau humaine dans une image couleur. Les visages constituent une catégorie de stimulus importante par la richesse des informations qu’ils véhiculent car avant de reconnaître n’importe quelle personne il est indispensable de localiser et reconnaître son visage. La plupart des applications liées à la sécurité et à la biométrie reposent sur la détection de régions de peau telles que la détection de visages, le filtrage d'objets 3D pour adultes et la reconnaissance de gestes. En outre, la détection de la saillance des mailles 3D est une phase de prétraitement importante pour de nombreuses applications de vision par ordinateur. La segmentation d'objets 3D basée sur des régions saillantes a été largement utilisée dans de nombreuses applications de vision par ordinateur telles que la correspondance de formes 3D, les alignements d'objets, le lissage de nuages de points 3D, la recherche des images sur le web, l’indexation des images par le contenu, la segmentation de la vidéo et la détection et la reconnaissance de visages. La détection de peau est une tâche très difficile pour différentes raisons liées en général à la variabilité de la forme et la couleur à détecter (teintes différentes d’une personne à une autre, orientation et tailles quelconques, conditions d’éclairage) et surtout pour les images issues du web capturées sous différentes conditions de lumière. Il existe plusieurs approches connues pour la détection de peau : les approches basées sur la géométrie et l’extraction de traits caractéristiques, les approches basées sur le mouvement (la soustraction de l’arrière-plan (SAP), différence entre deux images consécutives, calcul du flot optique) et les approches basées sur la couleur. 
Dans cette thèse, nous proposons des méthodes d'optimisation numérique pour la détection de régions de couleur de peau et de régions saillantes sur des maillages 3D et des nuages de points 3D en utilisant un graphe pondéré. En nous basant sur ces méthodes, nous proposons des approches de détection de visage 3D à l'aide de la programmation linéaire et de la fouille de données (Data Mining). En outre, nous avons adapté les méthodes proposées pour résoudre le problème de la simplification des nuages de points 3D et de la mise en correspondance des objets 3D. Nous montrons ensuite la robustesse et l'efficacité des méthodes proposées à travers différents résultats expérimentaux. Enfin, nous montrons la stabilité et la robustesse de nos méthodes par rapport au bruit.
Skin detection involves detecting the pixels corresponding to human skin in a color image. Faces constitute an important category of stimulus because of the wealth of information they convey: before recognizing any person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, such as face detection, 3D adult-object filtering, and gesture recognition. In addition, saliency detection on 3D meshes is an important preprocessing phase for many computer vision applications. 3D segmentation based on salient regions has been widely used in applications such as 3D shape matching, object alignment, 3D point-cloud smoothing, web image search, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task for various reasons, generally related to the variability of the shape and color to be detected (different hues from one person to another, arbitrary orientations and sizes, lighting conditions), especially for web images captured under different lighting conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, difference between two consecutive images, optical flow computation), and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin-colored and salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we provide 3D face detection approaches using linear programming and data mining. We also adapted the proposed methods to solve the problems of simplifying 3D point clouds and matching 3D objects, and we show the robustness and efficiency of our proposed methods through different experimental results. Finally, we show the stability and robustness of our methods with respect to noise.
12

Káňa, David. "Využití obecně orientovaných snímků v geoinformatice." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-392292.

Full text
Abstract:
This thesis deals with methods and algorithms used in computer vision for the fully automatic reconstruction of exterior orientation in ordered and unordered sets of images captured by calibrated digital cameras, without prior information about camera positions or scene structure. Existing methods for key-point detection, matching, and relative orientation of images are described. Methods and strategies for merging submodels into a global reconstruction, including full bundle adjustment, are proposed. The thesis also addresses issues of direct and indirect georeferencing of images and orthophoto production. An outline of the technology for capturing images with multiple-camera systems is given, and possible uses of oblique images are described, especially automatic texturing of 3D models and measurement in a single image using restrictive geometric constraints.
13

Rasheed, Ali Suad. "Economics Of Carbon Dioxide Sequestration In A Mature Oil Field." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610177/index.pdf.

Full text
Abstract:
To meet the goal of atmospheric stabilization of carbon dioxide (CO2) levels, a technological transformation should occur in the energy sector. One strategy to achieve this is carbon sequestration: carbon dioxide can be captured from industrial sources and sequestered underground in depleted oil and gas reservoirs. CO2 injected into geological formations such as mature oil reservoirs can be effectively trapped by hydrodynamic (structural), solution, residual (capillary), and mineral trapping mechanisms. In this work, a case study was conducted using CMG-STARS software for CO2 sequestration in a mature oil field. History matching was performed with the available production, bottom-hole pressure, and water-cut data to compare the simulator results with the field data. Next, previously developed optimization methods were modified and used for the case of study. The main objective of the optimization was to determine the optimal location, number of injection wells, injection rate, injection depth, and well pressure to maximize the total trapped amount of CO2 while enhancing the amount of oil recovered. A second round of simulations was carried out to study the factors that affect the total oil recovery and the CO2 storage amount, including relative-permeability end-point effects, hysteresis effects, fracture spacing, and the simultaneous injection of carbon dioxide with CO and H2S. Optimization runs were carried out on a mildly heterogeneous 3D model for a variety of cases. Compared with the base case, the optimized case led to an increase of 20% in the amount of oil recovered, and more than 95% of the injected CO2 was trapped as solution gas and as an immobile gas. Finally, an investigation of the economic feasibility was accomplished: NPV values for various cases were obtained and studied, yielding a number of cases found to be applicable for the field of concern.
14

Mäkinen, Veli. "Parameterized approximate string matching and local-similarity-based point-pattern matching." Helsinki : University of Helsinki, 2003. http://ethesis.helsinki.fi/julkaisut/mat/tieto/vk/makinen/.

Full text
15

Bayram, Ilker. "Interest Point Matching Across Arbitrary Views." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605114/index.pdf.

Full text
Abstract:
Making a computer 'see' is certainly one of the greatest challenges of today. Apart from possible applications, the solution may also shed light on, or at least give some idea of, how biological vision actually works. Many problems faced en route to successful algorithms require finding corresponding tokens in different views, which is termed the correspondence problem. For instance, given two images of the same scene from different views, if the camera positions and their internal parameters are known, it is possible to obtain the 3-dimensional coordinates of a point in space, relative to the cameras, if the same point can be located in both images. Interestingly, the camera positions and internal parameters may be extracted solely from the images if a sufficient number of corresponding tokens can be found. In this sense, two subproblems are examined: the choice of the tokens and how to match them. Due to the arbitrariness of the image pairs, invariant schemes are used for extracting and matching interest points, which are taken as the tokens to be matched. In order to appreciate the ideas behind these schemes, topics such as scale-space, rotational invariants, and affine invariants are introduced. The geometry of the problem is briefly reviewed, and the epipolar constraint is imposed using statistical outlier-rejection methods. Despite the satisfactory matching performance of simple correlation-based matching schemes on small-baseline pairs, the simulation results show the improvement when the mentioned invariants are used in the cases for which they are strictly necessary.
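The "simple correlation-based matching" referenced for small-baseline pairs is typically zero-mean normalized cross-correlation (ZNCC). A minimal sketch (illustrative only; numpy assumed):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches:
    +1 for patches identical up to gain/offset, near 0 for unrelated ones."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image):
    """Score the patch against every same-size window of `image` and
    return the best window's top-left corner and its ZNCC score."""
    h, w = patch.shape
    best_score, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            s = zncc(patch, image[r:r + h, c:c + w])
            if s > best_score:
                best_score, best_rc = s, (r, c)
    return best_rc, best_score

# Example: a patch cut from one view, with a gain/offset change applied,
# is still located exactly (ZNCC is invariant to affine intensity changes).
rng = np.random.default_rng(2)
img = rng.random((40, 40))
patch = 2.0 * img[10:19, 12:21] + 5.0
(r, c), score = best_match(patch, img)
```

This intensity-affine invariance is why plain correlation works on small baselines; it breaks down under the large rotations and scale changes that motivate the invariant schemes studied in the thesis.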
16

Sze, Wui-fung. "Robust feature-point based image matching." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37153262.

Full text
17

Sze, Wui-fung, and 施會豐. "Robust feature-point based image matching." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37153262.

Full text
18

Caetano, Tiberio Silva. "Graphical models and point set matching." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/4041.

Full text
Abstract:
Casamento de padrões de pontos em Espaços Euclidianos é um dos problemas fundamentais em reconhecimento de padrões, tendo aplicações que vão desde Visão Computacional até Química Computacional. Sempre que dois padrões complexos estão codificados em termos de dois conjuntos de pontos que identificam suas características fundamentais, sua comparação pode ser vista como um problema de casamento de padrões de pontos. Este trabalho propõe uma abordagem unificada para os problemas de casamento exato e inexato de padrões de pontos em Espaços Euclidianos de dimensão arbitrária. No caso de casamento exato, é garantida a obtenção de uma solução ótima. Para casamento inexato (quando ruído está presente), resultados experimentais confirmam a validade da abordagem. Inicialmente, considera-se o problema de casamento de padrões de pontos como um problema de casamento de grafos ponderados. O problema de casamento de grafos ponderados é então formulado como um problema de inferência Bayesiana em um modelo gráfico probabilístico. Ao explorar certos vínculos fundamentais existentes em padrões de pontos imersos em Espaços Euclidianos, provamos que, para o casamento exato de padrões de pontos, um modelo gráfico simples é equivalente ao modelo completo. É possível mostrar que inferência probabilística exata neste modelo simples tem complexidade polinomial para qualquer dimensionalidade do Espaço Euclidiano em consideração. Experimentos computacionais comparando esta técnica com a bem conhecida baseada em relaxamento probabilístico evidenciam uma melhora significativa de desempenho para casamento inexato de padrões de pontos. A abordagem proposta é significativamente mais robusta diante do aumento do tamanho dos padrões envolvidos. Na ausência de ruído, os resultados são sempre perfeitos.
Point pattern matching in Euclidean Spaces is one of the fundamental problems in Pattern Recognition, having applications ranging from Computer Vision to Computational Chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean Spaces of arbitrary dimension. In the case of exact matching, it is assured to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean Spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that for exact matching provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean Space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns. In the absence of noise, the results are always perfect.
APA, Harvard, Vancouver, ISO, and other styles
19

Arbouche, Samir. "Feature point correspondences, a matching constraints survey." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ48126.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Rey, Otero Ives. "Anatomy of the SIFT method." Thesis, Cachan, Ecole normale supérieure, 2015. http://www.theses.fr/2015DENS0044/document.

Full text
Abstract:
This thesis is an in-depth analysis of the SIFT method, the most popular image comparison method. By proposing a sampling of the Gaussian scale-space, it was also the first method to put scale-space theory into practice and to exploit its invariance to scale changes. SIFT associates with an image a set of descriptors invariant to scale changes, rotation, and translation. The descriptors of different images can be compared in order to match the images. Given its many applications and countless variants, studying an algorithm published a decade ago may be surprising. It appears, however, that little has been done to really understand this major algorithm and to establish rigorously to what extent it can be improved for high-precision applications. This study is divided into four parts. The exact computation of the Gaussian scale-space, which is at the heart of the SIFT method and of most of its competitors, is the subject of the first part. The second part is a meticulous dissection of the long chain of transformations that constitutes the SIFT method. Every parameter is documented and its influence analyzed. This dissection is also accompanied by an online publication of the algorithm. The detailed description comes with C code as well as a demonstration platform allowing the reader to analyze the influence of each parameter. In the third part, we define an exact experimental framework in order to verify that the SIFT method reliably and stably detects the extrema of the continuous scale-space from the discrete grid. Practical conclusions follow on the proper sampling of the Gaussian scale-space and on strategies for filtering out unstable points.
The same experimental framework is used to analyze the influence of image perturbations (aliasing, noise, blur). This analysis shows that the room for improvement is limited for the SIFT method and for all of its variants that rely on the scale-space to extract interest points. It also shows that oversampling the scale-space improves the extraction of extrema, and that restricting detection to high scales improves robustness to image perturbations. The last part concerns the performance evaluation of point detectors. The most commonly used performance metric is repeatability. We show that this metric nevertheless suffers from a bias and favors methods that generate redundant detections. To eliminate this bias, we propose a variant that takes into account the spatial distribution of detections. With this correction we re-evaluate the state of the art and show that, once detection redundancy is taken into account, the SIFT method is better than many of its most recent variants.
This dissertation contributes an in-depth analysis of the SIFT method. SIFT is the most popular and the first efficient image comparison model. SIFT is also the first method to propose a practical scale-space sampling and to put the theoretical scale invariance of scale space into practice. It associates with each image a list of scale-invariant (also rotation- and translation-invariant) features which can be used for comparison with other images. Because SIFT-style feature detectors have since been used in countless image processing applications, and because of an intimidating number of variants, studying an algorithm that was published more than a decade ago may be surprising. It seems, however, that not much has been done to really understand this central algorithm and to find out exactly what improvements we can hope for in reliable image matching methods. Our analysis of the SIFT algorithm is organized as follows. We focus first on the exact computation of the Gaussian scale-space, which is at the heart of SIFT as well as most of its competitors. We provide a meticulous dissection of the complex chain of transformations that forms the SIFT method, and a presentation of every design parameter, from the extraction of invariant keypoints to the computation of feature vectors. Using this documented implementation, which permits varying all of its parameters, we define a rigorous simulation framework to find out whether the scale-space features are indeed correctly detected by SIFT, and which sampling parameters influence the stability of the extracted keypoints. This analysis is extended to the influence of other crucial perturbations, such as errors in the amount of blur, aliasing, and noise. It demonstrates that, despite the fact that numerous methods claim to outperform the SIFT method, there is in fact limited room for improvement in methods that extract keypoints from a scale-space.
The comparison of the many detectors proposed in SIFT's competitors is the subject of the last part of this thesis. The performance analysis of local feature detectors has mainly been based on the repeatability criterion. We show that this popular criterion is biased toward methods producing redundant (overlapping) detections. We therefore propose an amended evaluation metric and use it to revisit a classic benchmark. Under the amended repeatability criterion, SIFT is shown to outperform most of its more recent competitors. This last fact corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
APA, Harvard, Vancouver, ISO, and other styles
21

Stančík, Petr. "Optoelektronické a fotogrammetrické měřicí systémy." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-233413.

Full text
Abstract:
The dissertation deals with the analysis and design of optoelectronic and photogrammetric measuring systems. A specific design of optoelectronic contactless flat-object area meters, with an analysis of attainable measurement accuracy, is described. The next part is dedicated to stereophotogrammetry: the principles of 3D reconstruction, methods of camera self-calibration, and the matching of points in images are described. The analysis of attainable accuracy of the monitored parameters is discussed as well. Finally, a test program implementing the described routines is introduced. This test program enables the practical application of a stereophotogrammetric system for capturing the spatial coordinates of 3D objects.
APA, Harvard, Vancouver, ISO, and other styles
22

Ben, Abdallah Hamdi. "Inspection d'assemblages aéronautiques par vision 2D/3D en exploitant la maquette numérique et la pose estimée en temps réel Three-dimensional point cloud analysis for automatic inspection of complex aeronautical mechanical assemblies Automatic inspection of aeronautical mechanical assemblies by matching the 3D CAD model and real 2D images." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2020. http://www.theses.fr/2020EMAC0001.

Full text
Abstract:
Cette thèse s'inscrit dans le contexte du développement d'outils numériques innovants au service de ce qui est communément désigné par Usine du Futur. Nos travaux de recherche ont été menés dans le cadre du laboratoire de recherche commun "Inspection 4.0" entre IMT Mines Albi/ICA et la Sté DIOTA spécialisée dans le développement d'outils numériques pour l'Industrie 4.0. Dans cette thèse, nous nous sommes intéressés au développement de systèmes exploitant des images 2D ou des nuages de points 3D pour l'inspection automatique d'assemblages mécaniques aéronautiques complexes (typiquement un moteur d'avion). Nous disposons du modèle CAO de l'assemblage (aussi désigné par maquette numérique) et il s'agit de vérifier que l'assemblage a été correctement assemblé, i.e que tous les éléments constituant l'assemblage sont présents, dans la bonne position et à la bonne place. La maquette numérique sert de référence. Nous avons développé deux scénarios d'inspection qui exploitent les moyens d'inspection développés par DIOTA : (1) un scénario basé sur une tablette équipée d'une caméra, portée par un opérateur pour un contrôle interactif temps-réel, (2) un scénario basé sur un robot équipé de capteurs (deux caméras et un scanner 3D) pour un contrôle totalement automatique. Dans les deux scénarios, une caméra dite de localisation fournit en temps-réel la pose entre le modèle CAO et les capteurs mis en œuvre (ce qui permet de relier directement la maquette numérique 3D avec les images 2D ou les nuages de points 3D analysés). Nous avons d'abord développé des méthodes d'inspection 2D, basées uniquement sur l'analyse d'images 2D puis, pour certains types d'inspection qui ne pouvaient pas être réalisés à partir d'images 2D (typiquement nécessitant la mesure de distances 3D), nous avons développé des méthodes d'inspection 3D basées sur l'analyse de nuages de points 3D. 
Pour l'inspection 3D de câbles électriques présents au sein de l'assemblage, nous avons proposé une méthode originale de segmentation 3D des câbles. Nous avons aussi traité la problématique de choix automatique de point de vue qui permet de positionner le capteur d'inspection dans une position d'observation optimale. Les méthodes développées ont été validées sur de nombreux cas industriels. Certains des algorithmes d’inspection développés durant cette thèse ont été intégrés dans le logiciel DIOTA Inspect© et sont utilisés quotidiennement chez les clients de DIOTA pour réaliser des inspections sur site industriel
This thesis makes part of a research aimed towards innovative digital tools for the service of what is commonly referred to as Factory of the Future. Our work was conducted in the scope of the joint research laboratory "Inspection 4.0" founded by IMT Mines Albi/ICA and the company DIOTA specialized in the development of numerical tools for Industry 4.0. In the thesis, we were interested in the development of systems exploiting 2D images or (and) 3D point clouds for the automatic inspection of complex aeronautical mechanical assemblies (typically an aircraft engine). The CAD (Computer Aided Design) model of the assembly is at our disposal and our task is to verify that the assembly has been correctly assembled, i.e. that all the elements constituting the assembly are present in the right position and at the right place. The CAD model serves as a reference. We have developed two inspection scenarios that exploit the inspection systems designed and implemented by DIOTA: (1) a scenario based on a tablet equipped with a camera, carried by a human operator for real-time interactive control, (2) a scenario based on a robot equipped with sensors (two cameras and a 3D scanner) for fully automatic control. In both scenarios, a so-called localisation camera provides in real-time the pose between the CAD model and the sensors (which allows to directly link the 3D digital model with the 2D images or the 3D point clouds analysed). We first developed 2D inspection methods, based solely on the analysis of 2D images. Then, for certain types of inspection that could not be performed by using 2D images only (typically requiring the measurement of 3D distances), we developed 3D inspection methods based on the analysis of 3D point clouds. For the 3D inspection of electrical cables, we proposed an original method for segmenting a cable within a point cloud. 
We have also tackled the problem of automatic selection of best view point, which allows the inspection sensor to be placed in an optimal observation position. The developed methods have been validated on many industrial cases. Some of the inspection algorithms developed during this thesis have been integrated into the DIOTA Inspect© software and are used daily by DIOTA's customers to perform inspections on industrial sites
APA, Harvard, Vancouver, ISO, and other styles
23

Ye, Jiacheng. "Computing Exact Bottleneck Distance on Random Point Sets." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98669.

Full text
Abstract:
Given a complete bipartite graph on two sets of points containing n points each, in a bottleneck matching problem we want to find a one-to-one correspondence, also called a matching, that minimizes the length of its largest edge; the length of an edge is simply the Euclidean distance between its end-points. As an application, consider matching taxis to requests while minimizing the largest distance between any request and its matched taxi. The length of the largest edge (also called the bottleneck distance) has numerous applications in machine learning as well as topological data analysis. One can use the classical Hopcroft-Karp (HK-) Algorithm to find the bottleneck matching. In this thesis, we consider the case where the two point sets, A and B, are generated uniformly at random from a unit square. Instead of the classical HK-Algorithm, we implement and empirically analyze a new algorithm by Lahn and Raghvendra (Symposium on Computational Geometry, 2019). Our experiments show that our approach outperforms the HK-Algorithm based approach for computing the bottleneck matching.
Master of Science
Consider the problem of matching taxis to an equal number of requests. While matching them, one objective is to minimize the largest distance between a request and its match. Finding such a matching is called the bottleneck matching problem. This optimization problem also arises in topological data analysis as well as machine learning. In this thesis, I conduct an empirical analysis of a new algorithm, called the FAST-MATCH algorithm, to find the bottleneck matching. I find that, when large inputs are randomly generated from a unit square, the FAST-MATCH algorithm performs substantially faster than the classical methods.
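The decision-plus-search structure behind bottleneck matching can be sketched compactly. The toy below is illustrative only, not the thesis's FAST-MATCH or HK implementation: it uses Kuhn's simple augmenting-path matcher as a stand-in for Hopcroft-Karp, and binary-searches the sorted candidate edge lengths for the smallest bottleneck admitting a perfect matching.

```python
def has_perfect_matching(A, B, limit_sq):
    """Can every point of A be matched to a distinct point of B using only
    edges of squared length <= limit_sq? (Kuhn's augmenting-path algorithm,
    a simpler stand-in for Hopcroft-Karp.)"""
    n = len(A)
    adj = [[j for j in range(n)
            if (A[i][0] - B[j][0]) ** 2 + (A[i][1] - B[j][1]) ** 2 <= limit_sq]
           for i in range(n)]
    match = [-1] * n  # match[j] = index in A currently matched to B[j]

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(n))

def bottleneck_distance(A, B):
    """Binary search over the sorted candidate squared edge lengths for the
    smallest bottleneck that still admits a perfect matching."""
    cand = sorted({(a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                   for a in A for b in B})
    lo, hi = 0, len(cand) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_perfect_matching(A, B, cand[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cand[lo] ** 0.5

print(bottleneck_distance([(0, 0), (1, 0)], [(0, 1), (1, 1)]))  # → 1.0
```

The same decision routine works with any perfect-matching test; swapping Kuhn's O(V·E) matcher for Hopcroft-Karp only changes the inner step, not the search.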
APA, Harvard, Vancouver, ISO, and other styles
24

Guo, Hongyu. "Diffeomorphic point matching with applications in medical image analysis." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Staniaszek, Michal. "Feature-Feature Matching For Object Retrieval in Point Clouds." Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170475.

Full text
Abstract:
In this project, we implement a system for retrieving instances of objects from point clouds using feature-based matching techniques. The target dataset of point clouds consists of approximately 80 full scans of office rooms over a period of one month. The raw clouds are preprocessed to remove regions which are unlikely to contain objects. Using locations determined by one of several possible interest point selection methods, one of a number of descriptors is extracted from the processed clouds. Descriptors from a target cloud are compared to those from a query object using a nearest neighbour approach. The nearest neighbours of each descriptor in the query cloud are used to vote for the position of the object in a 3D grid overlaid on the room cloud. We apply clustering in the voting space and rank the clusters according to the number of votes they contain. The centroid of each of the clusters is used to extract a region from the target cloud which, in the ideal case, corresponds to the query object. We perform an experimental evaluation of the system using various parameter settings in order to investigate factors affecting the usability of the system, and the efficacy of the system in retrieving correct objects. In the best case, we retrieve approximately 50% of the matching objects in the dataset. In the worst case, we retrieve only 10%. We find that the best approach is to use a uniform sampling over the room clouds, and to use a descriptor which factors in both colour and shape information to describe points.
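The voting scheme described above (nearest-neighbour descriptor matches casting votes for the object position in a grid over the room) can be illustrated with a toy sketch. The points, descriptors, and grid cell size below are invented for the example, and clustering is reduced to picking the top-voted cell.

```python
import math
from collections import Counter

def vote_for_object(query_pts, query_desc, target_pts, target_desc, cell=0.5):
    """Each query descriptor votes, via its nearest neighbour in descriptor
    space, for the grid cell that would contain the object centroid if the
    match were correct; the top-ranked cell is the best object hypothesis."""
    n = len(query_pts)
    centroid = [sum(p[i] for p in query_pts) / n for i in range(3)]
    votes = Counter()
    for qp, qd in zip(query_pts, query_desc):
        # brute-force nearest neighbour in descriptor space
        j = min(range(len(target_desc)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(qd, target_desc[k])))
        # if the match is right, the centroid sits at target point + offset
        guess = [target_pts[j][i] + (centroid[i] - qp[i]) for i in range(3)]
        votes[tuple(math.floor(g / cell) for g in guess)] += 1
    return votes.most_common()

# toy data: the query object re-embedded in the target cloud at +(5, 5, 0),
# plus one distractor point with a dissimilar descriptor
query_pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
query_desc = [(1, 0), (0, 1), (1, 1)]
target_pts = [(5, 5, 0), (6, 5, 0), (5, 6, 0), (2, 2, 2)]
target_desc = [(1, 0), (0, 1), (1, 1), (9, 9)]
print(vote_for_object(query_pts, query_desc, target_pts, target_desc)[0])
# → ((10, 10, 0), 3): all three matches agree on one cell
```

In the real system the votes would be clustered rather than binned into a single best cell, but the vote geometry is the same.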
APA, Harvard, Vancouver, ISO, and other styles
26

McReynolds, Daniel Peter. "Rigidity checking for matching 3D point correspondences under perspective projection." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25114.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Jian, and 张简. "Image point matching in multiple-view object reconstruction from imagesequences." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48079856.

Full text
Abstract:
This thesis is concerned with three-dimensional (3D) reconstruction and point registration, which are fundamental topics of numerous applications in the area of computer vision. First, we propose the multiple epipolar lines (MEL) shape recovery method for 3D reconstruction from an image sequence captured under circular motion. This method involves recovering the 3D shape by reconstructing a set of 3D rim curves. The position of each point on a 3D rim curve is estimated by using three or more views. Two or more of these views are chosen close to each other to guarantee good image point matching, while one or more views are chosen far from these views to properly compensate for the error introduced in the triangulation scheme by the short baseline of the close views. Image point matching among all views is performed using a new method that suitably combines epipolar geometry and cross-correlation. Second, we develop the one line search (OLS) method for estimating the 3D model of an object from a sequence of images. The recovered object comprises a set of 3D rim curves. The OLS method determines the image point correspondences of each 3D point through a single line search, along the ray defined by the camera center and each two-dimensional (2D) point, where a photo-consistency index is maximized. With this approach, the search area is reduced to a line segment, independently of the number of views. The key advantage of the proposed method is that only one variable is needed to define the corresponding 3D point, whereas approaches for multiple-view stereo typically exploit multiple epipolar lines and hence require multiple variables. Third, we propose the expectation conditional maximization for point registration (ECMPR) algorithm to solve the rigid point registration problem by fitting the problem into the framework of maximum likelihood with missing data. The unknown correspondences are handled via mixture models.
We derive a maximization criterion based on the expected complete-data log-likelihood. Then, the point registration problem can be solved by an instance of the expectation conditional maximization algorithm, that is, the ECMPR algorithm. Experiments with synthetic and real data are presented in each section. The proposed approaches provide satisfactory and promising results.
published_or_final_version
Electrical and Electronic Engineering
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
28

Jakubík, Tomáš. "Metoda sledování příznaků pro registraci sekvence medicínských obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219636.

Full text
Abstract:
The aim of this thesis is to become familiar with the issue of registration of medical image sequences. The main objective was to focus on the method of feature tracking in the image and various options for its implementation. The theoretical part describes various methods for the detection of feature points and feature point matching methods. In the practical part these methods were implemented in the Matlab programming environment and a simple graphical user interface was created.
APA, Harvard, Vancouver, ISO, and other styles
29

Haydar, Lazem Al-Saadi Adel. "Approximation of antenna patterns by means of a combination of Gaussian beams." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17937.

Full text
Abstract:
Modeling of electromagnetic wave propagation in terms of Gaussian beams (GBs) has been considered in recent years. The incident radiation is expanded in terms of GBs by means of the point matching method. The simultaneous equations can be solved directly to produce excitation coefficients that generate the approximate pattern of a known antenna. Two different types of antenna patterns have been approximated in terms of GBs: a truncated antenna pattern and a hyperbolic antenna pattern. The influence of the Gaussian beam parameters on the approximation process is clarified.
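As a hypothetical 1D stand-in for the procedure described (the real problem involves vector Gaussian beams and antenna patterns), the point matching idea reduces to a square linear system: evaluate each Gaussian basis function at the matching points and solve for the excitation coefficients so that the expansion equals the target pattern at exactly those points. The basis width, sample points, and target profile below are invented for illustration.

```python
import math

def gaussian_elim_solve(A, b):
    """Solve the square system A x = b by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# matching points double as beam "directions" (one coefficient per point)
pts = [-1.0 + 0.25 * i for i in range(9)]
basis = lambda t, c: math.exp(-((t - c) / 0.4) ** 2)
A = [[basis(t, c) for c in pts] for t in pts]
target = [math.cos(math.pi * t / 2) ** 2 for t in pts]   # stand-in pattern
coeffs = gaussian_elim_solve(A, target)

# the expansion reproduces the pattern at the matching points
recon = [sum(A[i][j] * coeffs[j] for j in range(9)) for i in range(9)]
print(max(abs(r - t) for r, t in zip(recon, target)) < 1e-6)
```

Between the matching points the expansion only approximates the pattern; the beam width governs how well, which is the parameter influence the thesis studies.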
APA, Harvard, Vancouver, ISO, and other styles
30

Pitcher, Courtney Richard. "Matching optical coherence tomography fingerprint scans using an iterative closest point pipeline." Master's thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33923.

Full text
Abstract:
Identifying people from their fingerprints is based on well-established technology. However, a number of challenges remain, notably overcoming the low feature density of the surface fingerprint and suboptimal feature matching. 2D contact-based fingerprint scanners offer low security performance, are easy to spoof, and are unhygienic. Optical Coherence Tomography (OCT) is an emerging technology that allows a 3D volumetric scan of the finger surface and its internal microstructures. The junction between the epidermis and dermis - the internal fingerprint - mirrors the external fingerprint. The external fingerprint is prone to degradation from wear, age, or disease. The internal fingerprint does not suffer these deficiencies, which makes it a viable candidate zone for feature extraction. We develop a biometrics pipeline that extracts and matches features from and around the internal fingerprint to address the deficiencies of contemporary 2D fingerprinting. Eleven different feature types are explored. For each type an extractor and an Iterative Closest Point (ICP) matcher is developed. ICP is modified to operate in a Cartesian-toroidal space. Each of these features is matched with ICP and, where one existed, compared against an existing matcher. The feature with the highest Area Under the Curve (AUC) of the Receiver Operating Characteristic, 0.910, is a composite of 3D minutiae and mean local cloud, followed by our geometric properties feature, with an AUC of 0.896. By contrast, 2D minutiae extracted from the internal fingerprint achieved an AUC of 0.860. These results make our pipeline useful in both access control and identification applications. ICP offers a low False Positive Rate and can match ∼30 composite 3D minutiae a second on a single-threaded system, which is ideal for access control. Identification systems require a high True Positive and True Negative Rate; in addition, time is a less stringent requirement.
New identification systems would benefit from the introduction of an OCT based pipeline, as all the 3D features we tested provide more accurate matching than 2D minutiae. We also demonstrate that ICP is a viable alternative to match traditional 2D features (minutiae). This method offers a significant improvement over the popular Bozorth3 matcher, with an AUC of 0.94 for ICP versus 0.86 for Bozorth3 when matching a highly distorted dataset generated with SFinGe. This compatibility means that ICP can easily replace other matchers in existing systems to increase security performance.
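A minimal 2D sketch of the ICP loop underlying such a pipeline (not the thesis's Cartesian-toroidal variant): alternate nearest-neighbour correspondence with a closed-form least-squares rigid fit. The demo transform and point set are invented for illustration.

```python
import math

def best_rigid_2d(P, Q):
    """Least-squares rotation + translation mapping points P onto Q
    (2D Kabsch, closed form via atan2 of cross/dot sums)."""
    n = len(P)
    mpx = sum(p[0] for p in P) / n; mpy = sum(p[1] for p in P) / n
    mqx = sum(q[0] for q in Q) / n; mqy = sum(q[1] for q in Q) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - mpx, py - mpy
        bx, by = qx - mqx, qy - mqy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    th = math.atan2(s_cross, s_dot)
    c, s = math.cos(th), math.sin(th)
    return th, mqx - (c * mpx - s * mpy), mqy - (s * mpx + c * mpy)

def icp(src, dst, iters=20):
    """Iterative closest point: alternate nearest-neighbour matching and
    best-fit rigid alignment; returns the aligned copy of src."""
    cur = [tuple(p) for p in src]
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                   for p in cur]
        th, tx, ty = best_rigid_2d(cur, matched)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur

# demo: recover a small rigid motion (theta = 0.2 rad, t = (0.3, -0.2))
src = [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0), (3.0, 2.0)]
c, s = math.cos(0.2), math.sin(0.2)
dst = [(c * x - s * y + 0.3, s * x + c * y - 0.2) for x, y in src]
aligned = icp(src, dst)
print(max(math.dist(a, b) for a, b in zip(aligned, dst)) < 1e-9)  # → True
```

Like all plain ICP, this converges only from a reasonable initial pose; the thesis's pipeline additionally works on feature points rather than raw coordinates.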
APA, Harvard, Vancouver, ISO, and other styles
31

SILVA, EUGENIO DA. "HISTORY MATCHING IN RESERVOIR SIMULATION MODELS BY GENETIC ALGORITHMS AND MULTIPLE-POINT GEOSTATISTICS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=19629@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
In the oil Exploration and Production (E&P) area, the detailed study of reservoir characteristics is imperative for the creation of simulation models that adequately represent their petrophysical properties. The availability of an adequate model is fundamental for obtaining accurate predictions about reservoir production, and this directly impacts management decision-making. Due to the uncertainties inherent in the characterization process, the corresponding simulation model needs to be adjusted periodically throughout the productive life of the reservoir. However, the task of adjusting the model properties translates into a complex optimization problem, where the number of variables involved grows with the number of blocks that make up the grid of the simulation model. Most of the time these adjustments involve empirical processes that demand a heavy workload from the specialist. This research investigates and evaluates a new hybrid computational technique, combining Genetic Algorithms and Multiple-Point Geostatistics, for the optimization of properties in reservoir models. The results obtained demonstrate the robustness and reliability of the proposed solution since, unlike traditionally adopted approaches, it is able to generate models that not only provide an adequate match of the production curves but also respect the geological characteristics of the reservoir.
In the Exploration and Production (E&P) of oil, the detailed study of reservoir characteristics is imperative for the creation of simulation models that adequately represent their petrophysical properties. The availability of an appropriate model is fundamental to obtaining accurate predictions about reservoir production, and this directly impacts management decisions. Due to the uncertainties inherent in the characterization process, the corresponding simulation model needs to be matched periodically along the productive period of the reservoir. However, the task of matching the model properties represents a complex optimization problem, in which the number of variables involved increases with the number of blocks that make up the grid of the simulation model. In most cases these matches involve empirical processes that take too much of an expert's time. This research investigates and evaluates a new hybrid computational technique, which combines Genetic Algorithms and Multiple-Point Geostatistics, for the optimization of properties in reservoir models. The results demonstrate the robustness and reliability of the proposed solution. Unlike traditional approaches, it is able to generate models that not only provide a proper match of the production curves, but also satisfy the geological characteristics of the reservoir.
APA, Harvard, Vancouver, ISO, and other styles
32

Gomes, Ana Sofia Ferrada. "Matching CO2 large point sources and potential geological storage sites in mainland Portugal." Master's thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/1884.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, specialization in Management and Environmental Systems
Fossil fuel combustion is the major source of the increasing atmospheric concentration of carbon dioxide (CO2) since the pre-industrial period. Combustion systems like power plants, cement, iron and steel production plants, and refineries are the main stationary sources of CO2 emissions. The reduction of greenhouse gas emissions is one of the main climate change mitigation measures, and carbon dioxide capture and storage (CCS) is one of the possible measures. The objective of this study was to analyze the hypothesis of implementing CCS systems in mainland Portugal based on source-sink matching. The CO2 large point sources (LPS) considered in mainland Portugal were the largest installations included in Phase II of the European Emissions Trading Scheme with the highest CO2 emissions, representing about 90% of the total CO2 emissions of the Trading Scheme verified in 2007. The potential geological storage locations considered were the geological formations previously identified in existing studies. After mapping the LPS and potential geological sinks of mainland Portugal, an analysis based on the proximity of the sources and storage sites was performed. From this it was possible to conclude that a large number of LPS are within or near the potential storage areas. An attempt at estimating the costs of implementing a CCS system in mainland Portugal was also made, considering the identified LPS and storage areas. This cost estimate was a very rough exercise but gives an order of magnitude of the costs of implementing a CCS system in mainland Portugal. Preliminary results suggest that at present CCS systems are not economically interesting in Portugal, but this may change with increasing costs of energy and emission permits. The present lack of information regarding geological storage sites is an important limitation for the assessment of implementing a CCS system in mainland Portugal.
Further detailed studies are required, starting with the characterisation of the geological sites and the candidate sources for CCS, from technical aspects to environmental and economic factors.
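A source-sink proximity analysis of this kind can be sketched in a few lines: pair each emission source with the nearest candidate storage site and flag pairs beyond a distance threshold. The plant and basin names, coordinates (in km), and threshold below are invented for illustration.

```python
import math

def match_sources_to_sinks(sources, sinks, max_km):
    """For each source, find the nearest sink; report None when even the
    nearest sink exceeds the distance threshold."""
    out = {}
    for name, pos in sources.items():
        site, d = min(((s, math.dist(pos, p)) for s, p in sinks.items()),
                      key=lambda t: t[1])
        out[name] = (site if d <= max_km else None, round(d, 1))
    return out

sources = {"plant_A": (0.0, 0.0), "plant_B": (120.0, 40.0)}
sinks = {"basin_1": (10.0, 5.0), "basin_2": (300.0, 300.0)}
print(match_sources_to_sinks(sources, sinks, max_km=50.0))
# → {'plant_A': ('basin_1', 11.2), 'plant_B': (None, 115.4)}
```

A real assessment would replace straight-line distance with pipeline routing costs and add capacity constraints on each sink, but the matching structure is the same.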
APA, Harvard, Vancouver, ISO, and other styles
33

OLIVEIRA, RAFAEL LIMA DE. "HISTORY MATCHING IN RESERVOIR SIMULATION MODELS BY COEVOLUTIONARY GENETIC ALGORITHMS AND MULTIPLE-POINT GEOESTATISTICS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35313@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
In the oil Exploration and Production (E&P) area, one of the most important tasks is the detailed study of reservoir characteristics in order to create simulation models that adequately represent them. During the productive life of a reservoir, its corresponding simulation model needs to be adjusted periodically, since the availability of an adequate model is fundamental for obtaining accurate production forecasts, and this directly impacts management decision-making. Adjusting the model properties translates into a complex optimization problem, where the number of variables involved grows with the number of blocks that make up the grid of the simulation model, demanding great effort from the specialist. A computational tool that can assist the specialist in part of this process can be very useful both for obtaining faster answers and for making better decisions. In view of this, this work combines computational intelligence, through a Coevolutionary Genetic Algorithm, with Multiple-Point Geostatistics, proposing and implementing an optimization architecture applied to adjusting the properties of reservoir models. This architecture differs from traditional approaches in that it is able to optimize more than one property of the reservoir simulation model simultaneously. Distributed processing was also used to exploit the parallel computational power of genetic algorithms. The architecture proved capable of generating models that adequately match the production curves while preserving the consistency and geological continuity of the reservoir, obtaining reductions of 98 percent and 97 percent in the matching error for historical and forecast data, respectively. For the porosity and permeability maps, the error reductions were 79 percent and 84 percent, respectively.
In oil Exploration and Production (E&P), one of the most important tasks is the detailed study of the characteristics of a reservoir in order to create simulation models that represent it adequately. During the productive life of a reservoir, its simulation model needs to be adjusted periodically, because the availability of an appropriate model is crucial for obtaining accurate production forecasts, which directly impacts management decisions. Adjusting the properties of the model translates into a complex optimization problem, in which the number of variables involved grows with the number of blocks that make up the mesh of the simulation model, demanding great effort from the specialist. A computational tool that can assist the specialist in part of this process can be very useful both for obtaining quicker responses and for making better decisions. Thus, this work combines computational intelligence, through a Coevolutionary Genetic Algorithm, with Multiple-Point Geostatistics, proposing and implementing an optimization architecture applied to the adjustment of reservoir model properties. This architecture differs from traditional approaches in being able to optimize more than one property of the reservoir simulation model simultaneously. Distributed processing was also used to exploit the parallel computing power of genetic algorithms. The architecture was capable of generating models that adequately fit the production curves while preserving the geological consistency and continuity of the reservoir, obtaining 98 percent and 97 percent reductions in the fitting error for the historical and forecast data, respectively. For the porosity and permeability maps, the error reductions were 79 percent and 84 percent, respectively.
APA, Harvard, Vancouver, ISO, and other styles
34

Ifrah, Philip. "Tree search and singular value decomposition : a comparison of two strategies for point-pattern matching." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=27229.

Full text
Abstract:
Two approaches for solving point-pattern matching problems are compared; namely, a graph-matching algorithm (1) and an SVD-based procedure (2). In both cases, the features used in the matching process are point coordinates in Euclidean n-space, $\mathbb{E}^n$. The patterns being matched are assumed to be related by a combination of two transformations: (1) a permutation of the feature points, which establishes the correspondence between the feature points of the different patterns, and (2) a global geometric transformation based on rigid motions, which aligns the patterns once the point correspondences are known. Finding the first transformation, known as the point correspondence problem, is the most computationally demanding part of the matching process; accordingly, the focus is placed on the algorithms' ability to establish point correspondences. Computer simulations are used to evaluate the performance of the algorithms' respective search strategies in terms of both the accuracy of the final solution and the speed with which it is obtained. In all of the experiments, the performance of the graph-matching algorithm is clearly superior to that of the SVD-based method in both speed and accuracy; however, it is shown that the computational requirements of the tree search procedure used by the graph-matching algorithm depend strongly on factors such as the magnitude of the noise contained in the patterns and the mutual distances between the feature points. The major weakness of the SVD-based algorithm is its inconsistency in converging to the expected solution, especially when extra or occluded points are present in one or more of the patterns to be matched.
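The alignment step that follows the correspondence search is classical: once correspondences are hypothesized, the best rigid motion follows from an SVD of the cross-covariance of the two point sets (the orthogonal Procrustes, or Kabsch, solution). The sketch below illustrates that SVD step only, under the assumption that correspondences are already known; it is not a reconstruction of the thesis's correspondence-search algorithms.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ~ Q_i.

    P, Q: (n, d) arrays of corresponding points (correspondences known).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known rotation/translation from noiseless points.
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy or partially wrong correspondences the recovered motion degrades, which is exactly the sensitivity the thesis investigates.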
APA, Harvard, Vancouver, ISO, and other styles
35

Ifrah, Philip Isaac. "Tree search and singular value decomposition, a comparison of two strategies for point-pattern matching." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq29602.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bingham, Mark. "An interest point based illumination condition matching approach to photometric registration within augmented reality worlds." Thesis, University of Huddersfield, 2011. http://eprints.hud.ac.uk/id/eprint/11048/.

Full text
Abstract:
With recent and continued increases in computing power, and advances in the field of computer graphics, realistic augmented reality environments can now offer inexpensive and powerful solutions in a whole range of training, simulation and leisure applications. One key challenge to maintaining convincing augmentation, and therefore user immersion, is ensuring consistent illumination conditions between virtual and real environments, so that objects appear to be lit by the same light sources. This research demonstrates how real world lighting conditions can be determined from the two-dimensional view of the user. Virtual objects can then be illuminated and virtual shadows cast using these conditions. This new technique uses pairs of interest points from real objects and the shadows that they cast, viewed from a binocular perspective, to determine the position of the illuminant. This research has been initially focused on single point light sources in order to show the potential of the technique and has investigated the relationships between the many parameters of the vision system. Optimal conditions have been discovered by mapping the results of experimentally varying parameters such as FoV, camera angle and pose, image resolution, aspect ratio and illuminant distance. The technique is able to provide increased robustness where greater resolution imagery is used. Under optimal conditions it is possible to derive the position of a real world light source with low average error. An investigation of the available literature has revealed that other techniques can be inflexible, slow, or disruptive to scene realism. This technique is able to locate and track a moving illuminant within an unconstrained, dynamic world without the use of artificial calibration objects that would disrupt scene realism. The technique operates in real time as the new algorithms are of low computational complexity, which allows high frame rates to be maintained within augmented reality applications. Illuminant updates occur several times a second on an average to high end desktop computer. Future work will investigate the automatic identification and selection of pairs of interest points and the exploration of global illuminant conditions. The latter will include an analysis of more complex scenes and the consideration of multiple and varied light sources.
APA, Harvard, Vancouver, ISO, and other styles
37

Sherkat, Navid. "Approximation of Antenna Patterns With Gaussian Beams in Wave Propagation Models." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-14437.

Full text
Abstract:
The topic of antenna pattern synthesis, in the context of beam shaping, is considered. One approach to this problem is to use the method of point matching. This method can be used to approximate antenna patterns with a set of uniformly spaced sources with suitable directivities. One specifies a desired antenna pattern and approximates it with a combination of beams. This approach results in a linear system of equations that can be solved for a set of beam coefficients. With suitable shifts between the matching points and between the source points, a good agreement between the assumed and the reproduced antenna patterns can be obtained along an observation line. This antenna modelling could be used in the program NERO to compute the field at the receiver antenna for a realistic 2D communication link. It is verified that the final result is not affected by the details of the antenna modelling.
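Point matching of this kind reduces to a linear system: evaluating each candidate beam at each matching point gives a matrix A, and the beam coefficients c solve A c ≈ f, where f samples the desired pattern. The following is a hedged one-dimensional sketch, with Gaussian-shaped basis functions standing in for the actual directive sources used in the thesis; the matching points, spacings, and beam widths are illustrative assumptions.

```python
import numpy as np

# Matching points along an observation line and an assumed desired pattern.
x = np.linspace(-1.0, 1.0, 41)                 # matching points
f = np.exp(-4.0 * x**2) * np.cos(3.0 * x)      # desired pattern samples

# Uniformly spaced sources; each contributes a Gaussian "beam" (an assumption
# standing in for the directive sources in the thesis).
centers = np.linspace(-1.0, 1.0, 15)
A = np.exp(-((x[:, None] - centers[None, :]) / 0.25) ** 2)   # (41, 15)

# Solve for the beam coefficients in the least-squares sense.
c, *_ = np.linalg.lstsq(A, f, rcond=None)
residual = np.max(np.abs(A @ c - f))
print(residual < 1e-2)  # True: the reproduced pattern matches closely
```

The quality of the fit depends on the shifts between matching points and between sources, which mirrors the observation in the abstract that suitable spacings give good agreement along the observation line.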
APA, Harvard, Vancouver, ISO, and other styles
38

Zavalina, Viktoriia. "Identifikace objektů v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220364.

Full text
Abstract:
This master's thesis deals with methods of object detection in images. It contains theoretical, practical, and experimental parts. The theoretical part describes image representation, image preprocessing methods, and methods for the detection and identification of objects. The practical part contains a description of the created programs and the algorithms used in them. The application was created in MATLAB and offers an intuitive graphical user interface and three different methods for the detection and identification of objects in an image. The experimental part contains test results for the implemented program.
APA, Harvard, Vancouver, ISO, and other styles
39

Vincent, Etienne. "On feature point matching, in the calibrated and uncalibrated contexts, between widely and narrowly separated images." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29179.

Full text
Abstract:
In this work, the correspondence problem for feature points between images is investigated. In this context, two important factors greatly influence the choice of a strategy: whether the camera system is calibrated or not, and how large is the separation between viewpoints. This work is divided into four parts, for the four important matching situations generated by these two factors. In the case of uncalibrated narrowly separated views, a framework for empirically evaluating matching constraints is presented. Then, various new and existing constraints are compared. In the case of calibrated narrowly separated views, a new type of feature is introduced, epipolar gradient features. These are then shown to be especially appropriate for matching in the context of quick reconstruction. The features are then matched with a new constraint based on trinocular line transfer. In the case of uncalibrated widely separated views, it is shown how the shape of feature points can be used to recover local perspective deformation between two views, and improve matching results. To this end, a new corner detector that generates the required information is also introduced. In the case of calibrated widely separated views, a more accurate estimate of local perspective deformation is obtained by incorporating the knowledge of the epipolar geometry. An application to fundamental matrix estimation is also introduced.
APA, Harvard, Vancouver, ISO, and other styles
40

Sweeten, Gary Allen. "Causal Inference with group-based trajectories and propensity score matching is high school dropout a turning point? /." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3504.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Criminology and Criminal Justice. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
41

Breuel, Thomas M. "Geometric Aspects of Visual Object Recognition." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/7342.

Full text
Abstract:
This thesis presents three important results in visual object recognition based on shape. (1) A new algorithm (RAST: Recognition by Adaptive Subdivisions of Transformation space) is presented that has lower average-case complexity than any known recognition algorithm. (2) It is shown, both theoretically and empirically, that representing 3D objects as collections of 2D views (the "View-Based Approximation") is feasible and affects the reliability of 3D recognition systems no more than other commonly made approximations. (3) The problem of recognition in cluttered scenes is considered from a Bayesian perspective; the commonly used "bounded-error error measure" is demonstrated to correspond to an independence assumption. It is shown that by better modeling the statistical properties of real scenes, objects can be recognized more reliably.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhen-Yu Weng and 翁振育. "HDR alignment with matching SURF feature points." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/u7dz8r.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
Academic year 105
High dynamic range (HDR) refers to the span between the maximum and minimum radiance values in a scene. The human eye can perceive a contrast ratio of roughly 100,000,000:1, while a camera, limited by its sensor and storage hardware, typically captures about 1000:1. To bring captured images closer to what the human eye observes, HDR imaging combines the information of multiple images taken at different exposure times, and tone mapping then rebuilds the pixel values of the final image. However, the images to be merged are captured at different times, so human and natural factors introduce misalignment between them; the multiple images must therefore be aligned before they are merged. This thesis studies the alignment of images with different dynamic ranges. Using Speeded Up Robust Features (SURF), we extract feature points from the images and match them. We then build an affine transform from the matched feature points to align the images. Finally, the aligned images are merged into the high dynamic range result.
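Once feature points have been matched between two exposures, a 2D affine transform can be estimated from the matched pairs in closed form by least squares (in practice a robust estimator such as RANSAC is used on top of this to reject mismatches). The sketch below shows only that estimation step, assuming the matches are already given; the synthetic points and transform are illustrative.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine M with [x', y'] ~ M @ [x, y, 1]."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3) homogeneous sources
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ M ~ dst
    return M.T                                   # (2, 3)

# Example: recover a known affine map from synthetic "matched feature points".
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(20, 2))
M_true = np.array([[0.9, -0.1,  5.0],
                   [0.2,  1.1, -3.0]])
dst = src @ M_true[:, :2].T + M_true[:, 2]       # apply the true transform
M = estimate_affine(src, dst)
print(np.allclose(M, M_true))  # True
```

With real SURF matches the pairs are noisy and contain outliers, which is why a robust fitting stage is normally placed between matching and alignment.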
APA, Harvard, Vancouver, ISO, and other styles
43

Chou, Yi-Hsiu, and 周意秀. "The Extraction and Matching of Feature Points." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/58202580947758496502.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
Academic year 90
Many computer vision tasks, e.g., camera calibration and 3-D reconstruction, rely on feature point extraction and matching. In this thesis, we design a system that can automatically extract feature points and find the matching pairs between two images or in an image sequence. The goal is to provide at least 8 matching pairs for the computation required in camera calibration and 3-D reconstruction. The system does not need any prior information about the scene or the camera. The only constraint is that any two successive images should be fairly similar; otherwise, feature point matching will be very difficult and time-consuming. The simulation results show that the correctness and the efficiency of feature point extraction and matching are improved by using edge information to assist the matching process.
APA, Harvard, Vancouver, ISO, and other styles
44

Chiu, Han-Pang, and Tomás Lozano-Pérez. "Matching Interest Points Using Projective Invariant Concentric Circles." 2004. http://hdl.handle.net/1721.1/7426.

Full text
Abstract:
We present a new method to perform reliable matching between different images. This method exploits a projective invariant property between concentric circles and the corresponding projected ellipses to find complete region correspondences centered on interest points. The method matches interest points allowing for a full perspective transformation and exploiting all the available luminance information in the regions. Experiments have been conducted on many different data sets to compare our approach to SIFT local descriptors. The results show the new method offers increased robustness to partial visibility, object rotation in depth, and viewpoint angle change.
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
45

Cai, Yao Hong, and 蔡耀弘. "A fast pattern matching algorithm on dominant points." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/98320112195391850905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tseng, Moa-Ching, and 曾茂清. "A Study on Expression Invariant Feature Points for Human Faces Matching." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/19126274649311324138.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Applied Mathematics, Master Program in Mathematical Modeling and Scientific Computing
Academic year 100
In this study, we give an overview of some common local shape feature descriptors. Their concepts, properties, and shortcomings are organized according to the literature. We then provide a discussion of facial feature extraction methods. Based on different local feature descriptors, we enumerate the corresponding methods and algorithms for the frontal facial scan. We then discuss in detail the problems caused by pose change and expression variation, and propose some ideas to address them. We conclude with a summary and promising future research directions for solving the problem of mouth feature point extraction.
APA, Harvard, Vancouver, ISO, and other styles
47

Fang, Gang. "Representative ridge points in fingerprints: A modified minutiae matching algorithm and analysis of individuality." 2007. http://proquest.umi.com/pqdweb?did=1320975101&sid=9&Fmt=2&clientId=39334&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (M.S.)--State University of New York at Buffalo, 2007.
Title from PDF title page (viewed on Nov. 09, 2007) Available through UMI ProQuest Digital Dissertations. Thesis adviser: Srihari, Sargur N. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
48

Yao-BinYang and 楊矅賓. "Fast Affine Template Matching using Coarse-to-Fine Optimal Search with Distributed Sampling Points." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/28613149533692220674.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
Academic year 103
In recent years, image analysis algorithms have become important with the rise of human-computer interaction and industrial automation. Applications in these fields interact with physical objects through image information for a specific purpose, so accurately and quickly extracting the necessary information is a primary objective. To obtain information about a specific pattern in an image, template matching becomes an important technology. This thesis presents a solution to the template matching problem using an optimal search. The proposed method can accurately and quickly find the location, scale, and orientation of a specific pattern without analyzing image features. Given a specific pattern and an image, a transformation set is first sampled as an approximation of the infinite space of transformations. The transformations in this set are then evaluated by their sums of absolute differences to judge whether to continue the optimal search under the given restrictions. During the search, relatively poor transformations are removed, and the remaining transformations are refined within a small area to find new transformations. Evaluation, judgement, and fine search are repeated until convergence or until the maximum number of searches is reached. Finally, the best transformation is obtained.
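The cost function at the heart of such a search, the sum of absolute differences (SAD) between the transformed template and the image, can be sketched as follows. For brevity the transformation here is restricted to pure translation with an exhaustive search; the thesis searches over full affine transformations with a coarse-to-fine strategy, which this sketch does not reproduce.

```python
import numpy as np

def sad(image, template, top, left):
    """Sum of absolute differences of the template placed at (top, left)."""
    h, w = template.shape
    window = image[top:top + h, left:left + w]
    return int(np.abs(window.astype(int) - template.astype(int)).sum())

def best_translation(image, template):
    """Exhaustive search over translations; returns the minimum-SAD position."""
    H, W = image.shape
    h, w = template.shape
    scores = {(r, c): sad(image, template, r, c)
              for r in range(H - h + 1) for c in range(W - w + 1)}
    return min(scores, key=scores.get)

# Example: cut the template out of a noiseless image and recover its position.
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
template = image[10:18, 5:13].copy()
print(best_translation(image, template))  # (10, 5)
```

The exhaustive loop above is exactly what the coarse-to-fine strategy avoids: by evaluating a sparse transformation set first and discarding poor candidates, far fewer SAD evaluations are needed.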
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Ying-Hong, and 陳英鴻. "Tie Point Matching for LiDAR Point Cloud Data." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/09175520693154279451.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Surveying Engineering, Master's and Doctoral Program
Academic year 92
Point cloud data collected by scanners records the surface information of the scanned objects. A complete observation is frequently composed of several scans, so merging multiple scanned data sets becomes an important issue. Finding conjugate points in the overlapping parts of the scans and calculating the coordinate transformation parameters are the common steps in merging point cloud data. However, the distribution of the point cloud is not regular, so there are no directly corresponding points; a conjugate point has to be derived by matching or analyzing the distributions of points between the conjugate areas. This thesis presents a point cloud matching method to find conjugate points in Lidar data. The proposed method works on a 3D regular grid obtained by interpolating the point cloud data into a 3D grid, so that 3D Normalized Cross-Correlation (NCC) matching can be applied. The matching position and matching quality are estimated by analyzing the NCC coefficients: the first-order original moment of the NCC coefficients estimates the matching position, and the second-order central moments estimate the quality in each direction. The test data include a set of airborne laser scanning data and a set of ground laser scanning data. The effects of grid size and the use of intensity data in the matching process were analyzed. The experimental results show that 3D grid-structured point cloud data can be matched successfully, and that matching quality can be estimated using the second moments of the NCC coefficients.
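Normalized cross-correlation generalizes directly from 2D images to 3D grids: both the template and each candidate window are standardized before the dot product, so the score is invariant to linear intensity changes. The sketch below illustrates an exhaustive 3D NCC search on synthetic voxel data, assuming the point clouds have already been interpolated onto regular grids; it does not reproduce the thesis's moment-based position and quality estimation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shaped arrays, in [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def match_3d(volume, template):
    """Exhaustive 3D NCC search; returns the offset with the highest score."""
    vz, vy, vx = volume.shape
    tz, ty, tx = template.shape
    best, best_pos = -2.0, None
    for z in range(vz - tz + 1):
        for y in range(vy - ty + 1):
            for x in range(vx - tx + 1):
                score = ncc(volume[z:z+tz, y:y+ty, x:x+tx], template)
                if score > best:
                    best, best_pos = score, (z, y, x)
    return best_pos, best

rng = np.random.default_rng(3)
volume = rng.standard_normal((12, 12, 12))
# The template is an intensity-scaled copy of a sub-block: NCC still peaks there.
template = 2.0 * volume[4:8, 2:6, 6:10] + 1.0
pos, score = match_3d(volume, template)
print(pos, round(score, 3))  # (4, 2, 6) 1.0
```

In the thesis, rather than taking the single best offset, the distribution of NCC coefficients around the peak is summarized by its moments, which yields both a sub-voxel position estimate and a per-direction quality measure.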
APA, Harvard, Vancouver, ISO, and other styles
50

Jana, Indrajit. "Matchings Between Point Processes." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2329.

Full text
APA, Harvard, Vancouver, ISO, and other styles