
Dissertations / Theses on the topic 'Camera Projector Calibration'

Consult the top 28 dissertations / theses for your research on the topic 'Camera Projector Calibration.'


1

Hilario, Maria Nadia. "Occlusion detection in front projection environments based on camera-projector calibration." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83866.

Full text
Abstract:
Camera-projector systems are increasingly being used to create large displays for data visualization, immersive environments and augmented reality. Front projection displays, however, suffer from occlusions, resulting in shadows cast onto the display and projector light cast onto the user. Researchers have begun addressing the issue of occlusion detection to enable dynamic shadow removal and to facilitate automatic user sensing in interactive display applications. A camera-projector system for occlusion detection in front projection environments is presented. The approach is based on offline camera-projector geometric and color calibration, which then enables online, dynamic camera-view synthesis of arbitrary projected scenes. Occluded display regions are detected through pixel-wise differencing between predicted and captured camera images. The implemented system is demonstrated for dynamic shadow detection and removal using a dually overlapped projector display.
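The pixel-wise differencing step can be sketched as follows (the function name and the fixed threshold are illustrative assumptions, not details from the thesis):

```python
import numpy as np

def detect_occlusions(predicted, captured, threshold=30):
    """Flag pixels where the captured camera image deviates from the
    image predicted by the calibrated camera-projector model.

    predicted, captured: uint8 grayscale images of equal shape.
    Returns a boolean occlusion mask.
    """
    diff = np.abs(predicted.astype(np.int16) - captured.astype(np.int16))
    return diff > threshold

# A shadow darkens the captured image relative to the prediction:
pred = np.full((4, 4), 200, dtype=np.uint8)
capt = pred.copy()
capt[1:3, 1:3] = 40          # occluded (shadowed) 2x2 block
mask = detect_occlusions(pred, capt)
```

In practice the predicted image would come from the offline geometric and color calibration rather than being supplied directly.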
APA, Harvard, Vancouver, ISO, and other styles
2

Tennander, David. "Automatic Projector Calibration for Curved Surfaces Using an Omnidirectional Camera." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209675.

Full text
Abstract:
This master's thesis presents an approach to removing the distortions that arise when projecting onto non-flat surfaces. By using an omnidirectional camera, a full 360° dome could be calibrated and the corresponding angles between multiple projections could be calculated. The camera was modelled with the Unified Projection Model, allowing any omnidirectional camera system to be used. Surface geometry was captured using Gray code patterns, the optimal image centre was computed as a quadratic optimisation problem, and finally a spline surface countering the distortions was generated using the FAST-LTS regression algorithm. The developed system used a RICOH THETA S camera calibrated with the omnidir module in OpenCV. A desirable result was achieved, and with overlapping projectors a maximum error of 0.5° was measured. Testing indicates that part of this error could have been introduced by the evaluation measurements themselves. The resulting application is considered a success and will be used by ÅF Technology AB when calibrating flight simulators.
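The Gray code patterns mentioned here are binary-reflected codes, whose key property is that adjacent projector columns differ in exactly one bit. A minimal sketch of the encoding and decoding (not the thesis's implementation):

```python
def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cascading XORs of the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each projector column index is projected as a bit-plane sequence;
# because adjacent codes differ in one bit, a single-bit decoding
# error displaces a correspondence by at most one column.
codes = [gray_encode(i) for i in range(8)]
```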
APA, Harvard, Vancouver, ISO, and other styles
3

Korostelev, Michael. "Performance Evaluation for Full 3D Projector Calibration Methods in Spatial Augmented Reality." Master's thesis, Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/213116.

Full text
Abstract:
Electrical and Computer Engineering
M.S.E.E.
Spatial Augmented Reality (SAR) has proven to be a compelling tool, not only for novel ways of visualizing information but also for creative work in the performing arts. The main challenge is to determine the accurate geometry of a projection space and to find an efficient and effective way to project digital media and information onto it to create an augmented space. In our previous implementation of SAR, we developed a projector-camera calibration approach using infrared markers. However, the projection suffered severe distortion due to the lack of depth information about the projection space. For this research, we propose an RGBD-sensor-and-projector system to replace our current projector-camera SAR system. Proper calibration between the camera or sensor and the projector links vision to projection, answering the question of which point in camera space maps to which point in projection space. Calibration resolves the problem of capturing the geometry of the space and allows us to accurately augment the surfaces of volumetric objects and features. In this work, three calibration methods are examined for performance and accuracy. Two are existing adaptations of 2D camera-projector calibration (calibration using arbitrary planes and ray-plane intersection); the third is a novel technique we propose that utilizes point-cloud information from the RGBD sensor directly. Through analysis and evaluation using re-projection error, results are presented identifying the proposed method as practical and robust.
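The re-projection error used for the evaluation can be illustrated with a minimal sketch (the RMS formulation and the names are assumptions; the thesis may weight or normalize differently):

```python
import numpy as np

def reprojection_rmse(P, points_3d, points_2d):
    """RMS distance between observed 2D points and 3D points projected
    by the 3x4 matrix P (homogeneous pinhole projection)."""
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # Nx4
    x = (P @ X.T).T                                           # Nx3
    x = x[:, :2] / x[:, 2:3]                                  # dehomogenize
    return np.sqrt(np.mean(np.sum((x - points_2d) ** 2, axis=1)))

# Canonical camera: error is zero when observations match exactly.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts3 = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 4.0]])
pts2 = pts3[:, :2] / pts3[:, 2:3]
```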
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
4

Mosnier, Jérémie. "Etalonnage d'un système de lumière structurée par asservissement visuel." Thesis, Clermont-Ferrand 2, 2011. http://www.theses.fr/2011CLF22194.

Full text
Abstract:
This thesis is part of a national project named SRDViand, whose aim was to develop a robotic system for deboning and cutting meat animals. To determine the cutting paths, a structured light system was developed; such systems are vision systems that use light-projection models for 3D reconstruction tasks. To achieve the best results, a new calibration method for structured light systems was established. Based on a broad state of the art, and on a proposed classification of existing methods, we calibrate a camera-projector pair using visual servoing. The validity and results of this method were assessed through numerous experimental tests conducted within the SRDViand project. Following the development of this method, a prototype for bovine cutting was built.
APA, Harvard, Vancouver, ISO, and other styles
5

Walter, Viktor. "Projekce dat do scény." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-240823.

Full text
Abstract:
The focus of this thesis is the cooperation of cameras and projectors in projection of data into a scene. It describes the means and theory necessary to achieve such cooperation, and suggests tasks for demonstration. A part of this project is also a program capable of using a camera and a projector to obtain necessary parameters of these devices. The program can demonstrate the quality of this calibration by projecting a pattern onto an object according to its current pose, as well as reconstruct the shape of an object with structured light. The thesis also describes some challenges and observations from development and testing of the program.
APA, Harvard, Vancouver, ISO, and other styles
6

Malík, Dalibor. "Zpracování dat z termokamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219683.

Full text
Abstract:
The aim of this master's thesis is to describe thermal camera measurement with error minimization. The basic concepts of thermography are explained, together with the implementation of a post-processing technique in which a graphically modified thermogram is back-projected onto the scene. This is closely related to scene design, calibration of the thermal camera with the projector, image rectification, thermogram processing with highlighting of the information of interest, and the implementation of control elements as the user interface. The results obtained are analyzed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Liming. "Recalage robuste à base de motifs de points pseudo aléatoires pour la réalité augmentée." Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0025.

Full text
Abstract:
Registration is a very important task in Augmented Reality (AR): it provides the spatial alignment between the real environment and virtual objects. Unlike tracking, which relies on previous-frame information, wide baseline localization finds the correct solution from a wide search space, so as to overcome initialization and tracking-failure problems. Various wide baseline localization methods have been applied successfully, but for objects with little or no texture there is still no promising method. One possible solution is to rely on geometric information, which sometimes does not vary as much as texture or color. This dissertation focuses on new wide baseline localization methods based entirely on geometric information, and more specifically on points. I propose two novel point pattern matching algorithms, RRDM and LGC. In particular, LGC registers 2D or 3D point patterns under any known transformation type and supports multi-pattern recognition. It has linear behavior with respect to the number of points, which allows real-time tracking. It is applied to multi-target tracking and augmentation, as well as to 3D model registration. A practical method for projector-camera system calibration based on LGC is also proposed, which can be useful for large-scale Spatial Augmented Reality (SAR). In addition, I developed a method to estimate the rotation axis of a surface of revolution quickly and precisely from 3D data; it is integrated in a novel framework to reconstruct surfaces of revolution from dense SLAM in real time.
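Once correspondences between point patterns are found, the rigid alignment itself reduces to a classical least-squares problem. The sketch below uses the Kabsch algorithm and illustrates only that final registration step, not the RRDM or LGC matchers themselves:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm); assumes correspondences are already known,
    which is the role of the point-pattern matcher."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1).
    D = np.diag([1.0] * (H.shape[0] - 1)
                + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_align(src, dst)
```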
APA, Harvard, Vancouver, ISO, and other styles
8

Silva, Roger Correia Pinheiro. "Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/3515.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A camera-projector system is capable of capturing three-dimensional geometric information about objects and real-world environments. Geometry capture in such a system is based on the projection of structured light over an object by the projector and the capture of the modulated scene by the camera. With a calibrated system, the deformation of the projected light caused by the object provides the information needed to reconstruct its geometry through triangulation. The present work describes the development of a high-definition camera-projector scanner (with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead to the reconstruction of geometry, such as camera-projector calibration, color calibration, image processing and triangulation, are detailed. The developed scanner uses the (b,s)-BCSL structured light coding, which employs the projection of a sequence of colored vertical stripes onto the scene. This coding scheme offers a flexible number of stripes for projection: the higher the number of stripes, the more detailed the captured geometry. One of the objectives of this work is to estimate the limit number of (b,s)-BCSL stripes possible within current high-definition video resolutions; this limit is the number that provides dense geometry reconstruction while keeping the error low. To evaluate the geometry reconstructed by the scanner for different numbers of stripes, we propose an error-measurement protocol that uses planes as objects to assess the quality of geometric reconstruction. From the point cloud generated by the scanner, the plane equation is estimated by least squares. For a fixed number of stripes, five independent scans of the plane are made: each scan leads to one equation, and the mean plane, estimated from the union of the five point clouds, is also computed.
A distance metric in projective space is used to evaluate the precision and accuracy of each number of projected stripes. In addition to the quantitative evaluation, the geometry of several objects is presented for qualitative evaluation. The results show that the limit number of stripes for high-resolution video yields a high density of points even on surfaces with large color variation.
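The plane-fitting step of the evaluation protocol can be sketched with an explicit-form least-squares fit (the thesis evaluates with a distance metric in projective space; this simplified explicit form and its names are assumptions):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point
    cloud; returns the coefficients and the RMS residual, which the
    protocol uses as a quality measure for the scan."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    rmse = np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))
    return coeffs, rmse

# Noise-free samples of z = 0.5x - 0.25y + 3 are recovered exactly.
xy = np.array([[0, 0], [1, 0], [0, 1], [2, 3], [4, 1]], dtype=float)
z = 0.5 * xy[:, 0] - 0.25 * xy[:, 1] + 3.0
(a, b, c), rmse = fit_plane(np.column_stack([xy, z]))
```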
APA, Harvard, Vancouver, ISO, and other styles
9

Zahrádka, Jiří. "Rozšířené uživatelské rozhraní." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236929.

Full text
Abstract:
This thesis falls into the field of user interface design. It focuses on tangible user interfaces that utilize a camera and a projector to augment physical objects with digital information, and it also describes the calibration of those devices. The primary objective of this thesis is the implementation of an augmented user interface for application window management. The system consists of a stationary camera, an overhead projector, and movable tangible objects: boards. The boards are equipped with fiducial markers so that they can be tracked in the camera image. The projector displays a conventional desktop onto the table and the tangible objects; for example, application windows can be projected onto boards, with the windows moving and rotating together with the boards.
APA, Harvard, Vancouver, ISO, and other styles
10

PINHEIRO, SASHA NICOLAS DA ROCHA. "CAMERA CALIBRATION USING FRONTO PARALLEL PROJECTION AND COLLINEARITY OF CONTROL POINTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28011@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Crucial for any computer vision or augmented reality application, camera calibration is the process of obtaining the intrinsic and extrinsic parameters of a camera, such as focal length, principal point and lens distortion values. Nowadays, the most widely used calibration method relies on images of a planar pattern taken from different perspectives, from which control points are extracted to set up a system of linear equations whose solution represents the camera parameters, followed by an optimization based on the 2D reprojection error. In this work, the ring calibration pattern was chosen because it offers higher accuracy in the detection of control points. By applying techniques such as fronto-parallel transformation, iterative refinement of the control points and adaptive segmentation of ellipses, our approach improves the result of the calibration process. Furthermore, we propose extending the optimization model by modifying the objective function to consider not only the 2D reprojection error but also the 2D collinearity error.
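The 2D collinearity error added to the objective function can be illustrated as the residual of control points from their total-least-squares line (a hedged sketch; the thesis's exact formulation may differ):

```python
import numpy as np

def collinearity_error(points):
    """RMS distance of 2D points from their best-fit line, computed by
    total least squares: the smallest right-singular vector of the
    centered points is the normal of the best-fit line."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]
    return np.sqrt(np.mean((centered @ normal) ** 2))

# Control points from one row of the grid should be collinear after
# undistortion; the residual quantifies remaining distortion.
line_pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
bent_pts = np.array([[0.0, 0.0], [1.0, 1.2], [2.0, 1.8], [3.0, 3.0]])
```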
APA, Harvard, Vancouver, ISO, and other styles
11

Nagdev, Alok. "Georeferencing digital camera images using internal camera model." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Luong, Quang-Tuan. "Matrice fondamentale et calibration visuelle sur l'environnement. Vers une plus grande autonomie des système robotiques." Phd thesis, Université Paris Sud - Paris XI, 1992. http://tel.archives-ouvertes.fr/tel-00549134.

Full text
Abstract:
This thesis addresses the general problem of calibrating a moving camera using only arbitrary views of the environment, i.e. without a calibration target or prior knowledge of the camera motion. The method, called self-calibration, is based on algebraic properties of projective geometry. It first involves computing the epipolar transformation through the fundamental matrix, a notion we have defined, which is of primary importance for all vision problems in which a full metric calibration is not already available. Determining this matrix unambiguously requires a minimum of eight point correspondences. The first techniques we studied are based on the invariance of the cross-ratio and on a method due to Sturm; they aim to compute the epipoles. We then introduced multiple criteria and parameterizations enabling robust estimation of the fundamental matrix by techniques derived from the Longuet-Higgins algorithm, which we compared. We show that a particular point configuration, sets of planes, lends itself to dedicated computation methods, but in any case makes the estimation less precise. The influence of the choice of the motions themselves on the stability of the computation is significant; we characterize it through covariance computations and explain certain situations using the critical surface, of which we propose an operational study. In a second stage, once a minimum of three motions has been performed, the intrinsic parameters of the camera can be obtained from a system of polynomial equations known as the Kruppa equations, for which we established several important properties.
We first propose a semi-analytical resolution method, then an efficient iterative approach that allows long image sequences, as well as uncertainty, to be taken into account. The computation of the extrinsic parameters, and an extension of the method to the calibration of a stereo system by a new approach, complete this work, whose experimental part includes numerous simulations as well as real examples.
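The eight-point computation of the fundamental matrix mentioned above can be sketched as the textbook linear estimate, without the robust criteria and parameterizations the thesis develops (names and the synthetic setup are illustrative):

```python
import numpy as np

def fundamental_matrix(x1, x2):
    """Linear eight-point estimate of F with x2_h^T F x1_h = 0 for
    N >= 8 correspondences (x1, x2 are Nx2 arrays of image points)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)   # enforce the rank-2 constraint
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

# Synthetic two-view geometry: canonical first camera, rotated and
# translated second camera, nine 3D points in general position.
def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

pts3 = np.array([[0.0, 0.0, 5.0], [1.0, 0.3, 4.0], [-0.8, 1.1, 6.2],
                 [1.3, 1.0, 5.5], [-1.2, -0.7, 4.6], [0.6, -1.4, 6.8],
                 [2.1, 0.9, 5.9], [-0.5, 1.8, 4.3], [1.7, -0.6, 6.4]])
R, t = rot_y(0.1), np.array([1.0, 0.1, 0.05])
x1 = pts3[:, :2] / pts3[:, 2:3]
cam2 = pts3 @ R.T + t
x2 = cam2[:, :2] / cam2[:, 2:3]
F = fundamental_matrix(x1[:8], x2[:8])
```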
APA, Harvard, Vancouver, ISO, and other styles
13

Amara, Ashwini. "Object Detection and Tracking Using Uncalibrated Cameras." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1184.

Full text
Abstract:
This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases: calibrating the cameras, detecting the object's feature points, tracking the object across frames, and analyzing its motion and behavior. The approach has two stages. First, the problem of camera calibration using a calibration object is studied; this approach retrieves the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The next major part of this work is the development of an automated system to estimate the trajectory of the object in 3D from image sequences, achieved by combining, adapting and integrating several state-of-the-art algorithms. Synthetic data based on a nearly-constant-velocity object motion model are used to evaluate the performance of the camera calibration and state estimation algorithms.
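A nearly-constant-velocity motion model of the kind used for the synthetic data can be sketched as one Kalman-style prediction step (the white-noise-acceleration process covariance and the names are assumptions):

```python
import numpy as np

def ncv_predict(x, P, dt, q):
    """One prediction step of a nearly-constant-velocity model for a
    single coordinate: state x = [position, velocity], covariance P."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    # Discrete white-noise-acceleration process covariance.
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 2.0])       # at the origin, moving 2 units/s
P = np.eye(2)
x, P = ncv_predict(x, P, dt=0.5, q=0.1)
```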
APA, Harvard, Vancouver, ISO, and other styles
14

Bandarupalli, Sowmya. "Vehicle detection and tracking using wireless sensors and video cameras." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/989.

Full text
Abstract:
This thesis presents the development of a surveillance testbed using wireless sensors and video cameras for vehicle detection and tracking. The experimental study includes the testbed design and discusses some of the implementation issues of using wireless sensors and video cameras in a practical application. A group of sensor devices equipped with light sensors is used to detect and localize the position of a moving vehicle. A background subtraction method is used to detect the moving vehicle in the video sequences, and the vehicle centroid is calculated in each frame. A non-linear minimization method is used to estimate the perspective transformation that projects 3D points to 2D image points. Vehicle location estimates from three cameras are fused to form a single trajectory representing the vehicle motion. Experimental results using both sensors and cameras are presented; the average error between vehicle location estimates from the cameras and the wireless sensors is around 0.5 ft.
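The background-subtraction and centroid steps can be sketched as follows (the threshold and names are illustrative assumptions):

```python
import numpy as np

def vehicle_centroid(frame, background, threshold=25):
    """Centroid (row, col) of foreground pixels found by background
    subtraction; returns None when nothing moves."""
    fg = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > threshold
    if not fg.any():
        return None
    rows, cols = np.nonzero(fg)
    return rows.mean(), cols.mean()

bg = np.zeros((6, 6), dtype=np.uint8)
frame = bg.copy()
frame[2:4, 3:5] = 200        # a bright "vehicle" blob
```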
APA, Harvard, Vancouver, ISO, and other styles
15

Castanheiro, Letícia Ferrari. "Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /." Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.

Full text
Abstract:
Advisor: Antonio Maria Garcia Tommaselli
Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can form a lightweight, compact and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques for calibrating a dual-fisheye system are presented in the literature. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications with this type of system are presented. The calibrating bundle adjustment was performed in the CMC (calibration of multiple cameras) software using Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. In order to evaluate the improvement gained by using points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations that lie only in the hemispherical field, and (2) points in the entire image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered i... (Complete abstract click electronic access below)
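The four projection models compared in this calibration differ only in how a ray's incidence angle maps to an image radius. A minimal sketch (the focal length and function names are illustrative):

```python
import math

def fisheye_radius(theta, f, model):
    """Image radius of a ray at incidence angle theta (radians) for
    common fisheye projection models with focal length f."""
    if model == "equidistant":
        return f * theta
    if model == "equisolid":
        return 2.0 * f * math.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * math.tan(theta / 2.0)
    if model == "orthogonal":
        return f * math.sin(theta)
    raise ValueError(model)

# Beyond 90 degrees (the hyper-hemispherical part of a 190-degree FOV)
# the orthogonal radius turns back on itself, which is why that model
# cannot represent points in the hyper-hemispherical image field.
r95_equisolid = fisheye_radius(math.radians(95), 1.0, "equisolid")
```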
Master's
APA, Harvard, Vancouver, ISO, and other styles
16

Jordan, Samuel James. "Projector-Camera Calibration Using Gray Code Patterns." Thesis, 2010. http://hdl.handle.net/1974/5911.

Full text
Abstract:
A parameter-free solution is presented for data projector calibration using a single camera and Gray-coded structured light patterns. The proposed method assumes that both camera and projector exhibit significant non-linear distortion, and that projection surfaces can be either planar or freeform. The camera is calibrated first through traditional methods, and the calibrated images are then used to detect Gray-coded patterns displayed on a surface by the data projector. Projector-to-camera correspondences are created by decoding the patterns in the camera images to form a 2D correspondence map. Calibrated systems produce geometrically correct, extremely short throw projections, while maintaining or exceeding the projection size of a standard configuration. Qualitative experiments are performed on two baseline images, while quantitative data is recovered from the projected image of a chessboard pattern. A typical throw ratio of 0.5 can be achieved with a pixel distance error below 1.
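The Gray-code decoding step described in the abstract, turning the sequence of bits each camera pixel observes into a projector column (or row) index, reduces to the reflected-Gray-code conversion. A generic sketch (not the thesis's exact implementation; thresholding camera intensities into bits is assumed to have happened already):

```python
def to_gray(n):
    """Binary index -> reflected Gray code (consecutive codes differ by one bit)."""
    return n ^ (n >> 1)

def from_gray(g):
    """Gray code -> binary index via prefix XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def decode_pixel(bits):
    """bits: 0/1 values one camera pixel observed across the pattern
    sequence, most significant pattern first -> projector column index."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    return from_gray(g)
```

Because adjacent Gray codes differ in a single bit, a decoding error at a stripe boundary displaces the index by at most one column, which is what makes these patterns robust for correspondence maps.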
Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2010-06-29 09:33:50.311
APA, Harvard, Vancouver, ISO, and other styles
17

Che-YungHung and 洪哲詠. "Camera-assisted Calibration Techniques for Merging Multi-projector Systems." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/97628708470738795905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lai, Hsin-Yi, and 賴馨怡. "Calibration of camera and projector with applications on 3D scanning." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/a5rtms.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Master's Program in Mathematical Modeling and Scientific Computing, Department of Applied Mathematics
Academic year 105 (2016-17)
In this thesis, we propose a 3D point cloud construction algorithm using a projector-camera system. To achieve this, the camera and projector parameters must be accurately estimated with a calibration algorithm. Camera pixels and projector pixels are then put into correspondence using a structured light technique. Finally, the object shape is reconstructed in the form of a point cloud by exploiting the epipolar geometry.
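At its core, reconstruction from a calibrated camera-projector pair is linear triangulation: the projector is treated as a second "view", and each pixel correspondence yields one 3D point. A minimal DLT triangulation sketch (illustrative only; the projection matrices and point names are assumptions, not taken from the thesis):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices (camera and projector).
    x1, x2: matching 2D points in normalized image coordinates.
    Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

Running this for every decoded structured-light correspondence produces the point cloud the abstract refers to.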
APA, Harvard, Vancouver, ISO, and other styles
19

Sun, Hung-Chi, and 孫宏奇. "Calibration of Multi-views Projector-Camera System by Using Structured Light and Applications in 3D Scan." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/zb4fu3.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Master's Program in Mathematical Modeling and Scientific Computing, Department of Applied Mathematics
Academic year 106 (2017-18)
We propose calibration algorithms and 3D reconstruction algorithms for a multiple-view projector-camera system with structured light. In order to generate high-accuracy calibration data, each projector-camera pair is calibrated by extracting the correspondences of control points between the projector and the camera with structured light. The 3D model is reconstructed by computing depth information from the baseline and the distance between corresponding points in the camera image and the projector image. The multiple-view projector-camera system captures the model of an object from different views using a turntable. We merge all the data together with the iterative closest point algorithm.
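The merging step mentioned above, iterative closest point (ICP), alternates nearest-neighbour matching with a closed-form rigid alignment (Kabsch/SVD). A minimal point-to-point sketch, assuming the scans are already roughly aligned (as turntable views typically are); brute-force matching is used here for brevity, where a k-d tree would be used in practice:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares R, t with R @ A_i + t ~= B_i (Kabsch algorithm)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=10):
    """Align point cloud `src` to `dst` by iterating matching + Kabsch."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour of every current point in dst
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

ICP converges to the nearest local minimum, which is why the per-view calibration above matters: it supplies the coarse initial alignment.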
APA, Harvard, Vancouver, ISO, and other styles
20

Bélanger, Lucie. "Calibration de systèmes de caméras et projecteurs dans des applications de création multimédia." Thèse, 2009. http://hdl.handle.net/1866/3864.

Full text
Abstract:
This thesis focuses on computer vision applications for technological art projects. Camera and projector calibration is discussed in the context of tracking applications and 3D reconstruction in visual arts and performance art. The thesis is based on two collaborations with québécois artists Daniel Danis and Nicolas Reeves. Projective geometry and classical camera calibration techniques, such as planar calibration and calibration from epipolar geometry, are detailed to introduce the techniques implemented in both artistic projects. The project realized in collaboration with Nicolas Reeves consists of calibrating a pan-tilt camera-projector system in order to adapt videos to be projected in real time on mobile cubic screens. To fulfil the project, we used classical camera calibration techniques combined with our proposed camera pose calibration technique for pan-tilt systems. This technique uses elliptic planes, generated by the observation of a point in the scene while the camera is panning, to compute the camera pose in relation to the rotation centre of the pan-tilt system. The project developed in collaboration with Daniel Danis is based on multi-camera calibration. For this studio theatre project, we developed a multi-camera calibration algorithm to be used with a wiimote network. The technique based on epipolar geometry allows 3D reconstruction of a trajectory in a large environment at a low cost. The results obtained from the camera calibration techniques implemented are presented alongside their application in real public performance contexts.
APA, Harvard, Vancouver, ISO, and other styles
21

Dhillon, Daljit Singh J. S. "Geometric And Radiometric Estimation In A Structured-Light 3D Scanner." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/2270.

Full text
Abstract:
Measuring 3D surface geometry with precision and accuracy is an important part of many engineering and scientific tasks. 3D scanning techniques measure surface geometry by estimating the locations of sampled surface points. In recent years, structured-light 3D scanners have gained significant popularity owing to their ability to produce highly accurate scans in real time at a low cost. In this thesis we describe an approach for structured-light 3D scanning using a digital camera and a digital projector. We utilise the projective geometric relationships between the projector and the camera both to carry out an implicit calibration of the system and to solve for 3D structure. Our approach to geometric calibration is flexible, reliable and amenable to robust estimation. In addition, we model and account for the radiometric non-linearities in the projector, such as gamma distortion. Finally, we apply a post-processing step to efficiently smooth out high-frequency surface noise while retaining the structural details. Consequently, the proposed work reduces the computational load and set-up time of a structured-light 3D scanner, thereby speeding up the whole scanning process while retaining the ability to generate highly accurate results. We demonstrate the accuracy of our scanning results on real-world objects of varying degrees of surface complexity.
Introduction
The projective geometry for a pair of pin-hole viewing devices is completely defined by their intrinsic calibration and their relative motion, or extrinsic calibration, in the form of matrices. For a Euclidean reconstruction, the geometric elements represented by the calibration matrices must be parameterised and estimated in some form. The use of a projector as the 'second viewing' device has led to numerous approaches to model and estimate its intrinsic parameters and relative motion with respect to the camera's 3D co-ordinate system. The proposed thesis work assimilates the benefits of projective geometry constructs, such as homography and the invariance of cross-ratios, to simplify the system calibration and 3D estimation processes through an implicit modeling of the projector's intrinsic parameters and its relative motion. Though linear modeling of the projective geometry between a camera-projector view-pair captures the most essential aspects of the underlying geometry, it does not accommodate system non-linearities due to radiometric distortions of the projector device. We propose an approach that uses parametric splines to model the systematic errors introduced by radiometric non-linearities and thus correct for them. For 3D surfaces reconstructed as point clouds, noise manifests itself as high-frequency variations in the resulting mesh. Various pre- and/or post-processing techniques have been proposed in the literature to model and minimize the effects of noise. We use simple bilateral filtering of the depth map for the reconstructed surface to smoothen the surface while retaining its structural details.
Modeling Projective Relations
In our approach for calibrating the projective-geometric structure of a projector-camera view-pair, the frame of reference for measurements is attached to the camera. The camera is calibrated using a commonly used method. To calibrate the scanner system, one common approach is to project sinusoidal patterns onto the reference planes to generate reference phase maps. By relating the phase information between the projector and image pixels, a dense mapping is obtained. However, this is an over-parameterisation of the calibration information. Since the reference object is a plane, we can use the projective relationships induced by a plane to implicitly calibrate the projector geometry. For the estimation of the three-dimensional structure of the imaged object, we utilise the invariance of cross-ratios along with the calibration information of two reference planes. Our formulation is also extensible to utilise more than two reference planes to compute more than one estimate of the location of an unknown surface point. Such estimates are amenable to statistical analysis, which allows us both to derive the shape of an object and to associate reliability scores with each estimated point location.
Radiometric Correction
Structured-light-based 3D scanners commonly employ phase-shifted sinusoidal patterns to solve the correspondence problem. For scanners using projective geometry between a camera and a projector, the projector's radiometric non-linearities introduce systematic errors in establishing correspondences. Such errors manifest as visual artifacts which become pronounced when fewer phase-shifted sinusoidal patterns are used. While these artifacts can be avoided by using a large number of phase shifts, doing so also increases the acquisition time. We propose to model and rectify such systematic errors using parametric representations. Consequently, while some existing methods retain the complete reference phase maps to account for such distortions, our approach describes the deviations using a few model parameters. The proposed approach can be used to reduce the number of phase-shifted sinusoidal patterns required for codification while suppressing systematic artifacts. Additionally, our method avoids the 1D search steps that are needed when a complete reference phase map is used, thus reducing the computational load for 3D estimation. The effectiveness of our method is demonstrated with the reconstruction of some geometric surfaces and a cultural figurine.
Filtering Noise
For a structured-light 3D scanner, various sources of noise in the environment and the devices lead to inaccuracies in estimating the codewords (phase map) for an unknown surface during reconstruction. We examine the effects of such noise factors on our proposed methods for geometric and radiometric estimation. We present a quantitative evaluation of our proposed method by scanning objects of known geometric properties or measures and then computing the deviations from the expected results. In addition, we evaluate the errors introduced by inaccuracies in system calibration by computing variance statistics from multiple estimates of the reconstructed 3D points, where each estimate is computed using a different pair of reference planes. Finally, we discuss the efficacy of certain filtering techniques in reducing high-frequency surface noise when applied to: (a) the images of the unknown surface at a pre-processing stage, or (b) the respective phase (or depth) map at a post-processing stage.
Conclusion
In this thesis, we motivate the need for a procedurally simple and computationally less demanding approach to projector calibration. We present a method that uses homographies induced by a pair of reference planes to calibrate a structured-light scanner. By using the projective invariance of the cross-ratio, we solve for the 3D geometry of a scanned surface. We demonstrate that 3D geometric information can be derived using our approach with accuracy on the order of 0.1 mm. The proposed method reduces the image acquisition time for calibration and the computational needs for 3D estimation. We demonstrate an approach to effectively model radiometric distortions of the projector using cubic splines. Our approach is shown to give significant improvement over the use of complete reference phase maps, and its performance is comparable to that of a state-of-the-art method, both quantitatively and qualitatively. In contrast with that method, the proposed method is computationally less expensive, procedurally simpler, and exhibits consistent performance even at relatively high levels of noise in phase estimation. Finally, we use simple bilateral filtering on the depth map for the region of interest. Bilateral filtering provides the best trade-off between surface smoothing and the preservation of structural details. Our filtering approach avoids computationally expensive surface normal estimation algorithms completely while improving surface fidelity.
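The bilateral depth-map filtering the thesis relies on weights each neighbour by both spatial distance and depth similarity, so flat regions are smoothed while depth discontinuities survive. A straightforward (unoptimized) sketch, with parameter names chosen for illustration rather than taken from the thesis:

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=2.0, sigma_r=0.05):
    """Edge-preserving smoothing of a depth map.
    sigma_s: spatial falloff (pixels); sigma_r: range falloff (depth units)."""
    depth = np.asarray(depth, dtype=float)
    h, w = depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # spatial kernel
    pad = np.pad(depth, radius, mode='edge')
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range kernel: down-weights neighbours at a different depth
            rng_w = np.exp(-((win - depth[y, x])**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```

Because a neighbour across a depth step gets a near-zero range weight, step edges are preserved almost exactly while small-amplitude noise is averaged away, which is the trade-off the conclusion describes.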
APA, Harvard, Vancouver, ISO, and other styles
22

Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Thèse, 2010. http://hdl.handle.net/1866/4868.

Full text
Abstract:
The topic of this thesis revolves around three fundamental problems in computer vision, namely video tracking, camera calibration and shape recovery. The proposed methods are solely based on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (a.k.a. pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter, a calibration algorithm that does not require a physical grid, together with a planar auto-calibration algorithm. Finally, we pointed our third research direction toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
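Plane-based calibration of the kind mentioned above builds on estimating homographies between a planar target and the image. The classic building block is the Direct Linear Transform (DLT); a generic sketch, not the thesis's specific algorithm:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H with dst ~ H @ src (homogeneous coordinates) from
    N >= 4 point correspondences; plain DLT without normalization."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector = flattened H, up to scale
```

In practice, several such plane-to-image homographies (one per target pose) provide linear constraints on the intrinsic parameters, which is what makes plane-based calibration convenient.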
This thesis was carried out under a joint supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was conducted in the 3D vision laboratory (DIRO, UdM) and the PERCEPTION-INRIA laboratory (Grenoble).
APA, Harvard, Vancouver, ISO, and other styles
23

Bouchard, Louis. "Reconstruction tridimensionnelle pour projection sur surfaces arbitraires." Thèse, 2013. http://hdl.handle.net/1866/9826.

Full text
Abstract:
This thesis falls within the field of computer vision. It focuses on stereoscopic camera calibration, camera-projector matching, 3D reconstruction, projector blending, point cloud meshing, and surface parameterization. Conducted as part of the LightTwist project at the Vision3D laboratory, the work presented in this thesis aims to facilitate video projections on large surfaces of arbitrary shape using more than one projector. This type of projection is often seen in theater, digital arts, and architectural projections. To this end, we begin with the calibration of the cameras, followed by a piecewise 3D reconstruction using an active unstructured light scanning method. An automated alignment and meshing of the partial reconstructions yields a complete 3D model of the projection surface. This thesis then introduces a new approach for the parameterization of 3D models based on an efficient computation of geodesic distances across triangular meshes. The only input required from the user is the manual selection of the boundaries of the projection area on the model. The final parameterization is computed using the geodesic distances obtained for each of the model's vertices. Until now, existing methods did not permit the parameterization of models having a million vertices or more.
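A simple baseline for geodesic distances on a triangular mesh is Dijkstra's algorithm on the edge graph. This only gives an upper bound on the true surface distance (paths are restricted to mesh edges) and is not the efficient method the thesis develops, but it illustrates the computation that the parameterization consumes:

```python
import heapq, math

def dijkstra_geodesic(vertices, faces, source):
    """Edge-graph shortest-path distances from `source` to every vertex.
    vertices: list of (x, y, z); faces: list of (i, j, k) index triples."""
    adj = {i: set() for i in range(len(vertices))}
    for a, b, c in faces:                 # every face edge joins two vertices
        adj[a] |= {b, c}
        adj[b] |= {a, c}
        adj[c] |= {a, b}
    dist = [math.inf] * len(vertices)
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v in adj[u]:
            nd = d + math.dist(vertices[u], vertices[v])
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With the per-vertex distances in hand, a parameterization can map each vertex according to its distance from the user-selected boundary, which is the role geodesic distances play in the approach described above.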
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Yen-Nien. "Geometric and Photometric Calibration for Tiling Multiple Projectors with a Pan-Tilt-Zoom Camera." 2004. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1607200400163300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Yen-Nien, and 吳延年. "Geometric and Photometric Calibration for Tiling Multiple Projectors with a Pan-Tilt-Zoom Camera." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/13351493115037455304.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 92 (2003-04)
Over the past few years, a considerable number of studies have addressed building large, high-resolution displays. The aim of this thesis is to build a seamless large-scale display system by tiling multiple projectors with the help of a pan-tilt-zoom camera. It contributes to the display field by combining different techniques to create a large, high-resolution, and low-cost display system. This research takes two tasks into consideration when producing the seamless display: (1) geometric calibration and (2) photometric calibration. I develop a vision-based approach to accomplish these tasks by utilizing a pan-tilt-zoom video camera. Compared to previous work, my method for geometric calibration is more accurate because it operates at much higher effective resolution, obtained by combining several zoomed-in images, and it requires neither a camera calibration nor an estimate of the 3D geometry of the display surface. As opposed to the spectroradiometer, an expensive device previously used for color calibration, a camera is used because of its low cost. I take advantage of high dynamic range imaging to estimate the true color, together with a mapping between the camera and a colorimeter. With photometric calibration, the color across the different projectors becomes uniform. This system has great potential for many applications that require a large-format, high-resolution display, such as scientific visualizations and visual display walls.
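One standard ingredient of photometric correction in tiled-projector displays (related to, though not necessarily identical with, the thesis's method) is gamma-compensated edge blending: in the overlap region, each projector's input is attenuated so that the *displayed* luminances, not the input values, sum to a constant. A sketch under the common assumption that output luminance scales as input raised to the power gamma:

```python
def blend_weights(t, gamma=2.2):
    """Blend weights for two projectors across an overlap region.
    t in [0, 1]: position across the overlap, 0 = fully projector A,
    1 = fully projector B. Applying the 1/gamma exponent in the input
    domain makes the displayed luminances sum to a constant, assuming
    luminance ~ input**gamma."""
    return (1.0 - t) ** (1.0 / gamma), t ** (1.0 / gamma)
```

A naive linear ramp (weights `1 - t` and `t`) would leave a visible dark band in the overlap, because `(1 - t)**gamma + t**gamma < 1` for intermediate `t`; the 1/gamma pre-compensation removes it.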
APA, Harvard, Vancouver, ISO, and other styles
26

Carapinha, Rui Filipe Santos. "Project i-RoCS: dirt detection system based on computer vision." Master's thesis, 2020. http://hdl.handle.net/10773/30482.

Full text
Abstract:
The ultimate goal of the i-RoCS project is to provide an efficient automatic robotic solution to clean industrial floors. The solution will integrate state-of-the-art computer vision algorithms for the navigation of the robot and for the monitoring of the cleaning process. Industrial floor cleaning is one of the most important tasks for the safety of the personnel in a factory. In the worst case, a damaged/slippery floor can lead to the most varied accidents. This is the main reason why the most advanced technologies should be involved in this area. In this thesis we aim to take a step towards that goal. Digital cameras, with proper use and proper algorithms, can be among the richest sensors available in an industrial environment due to the information they can capture. This information is a conversion of the real world into digital information that can be further processed. From this information, low-level computer vision algorithms can detect many features in an image, such as colors, lines, blobs, contours, edges, and patterns, among others. In this thesis, we give an introduction of state-of-the-art technology for the cleaning task in a factory. For that purpose, we present a study on the implementation of cameras and digital image processing to detect dirt on industrial floors. We propose a method for automatic calibration of the camera parameters to tackle the difficult lighting conditions that can be found inside factories. We developed algorithms for the extraction of low-level characteristics to be used in the detection of dirt, which obtained promising detection results. However, their performance is not satisfactory if they are to be applied in real time on a mobile robot. The last step was the implementation of deep learning, one of the most promising technologies of the past few years in image processing. The proposed solution is a segmentation network followed by a regression network. The segmentation network classifies the several types of patterns existing on the ground, and the regression network outputs the level of dirtiness of each area.
Master's in Electronic Engineering and Telecommunications
APA, Harvard, Vancouver, ISO, and other styles
27

Madeira, Tiago de Matos Ferreira. "Enhancement of RGB-D image alignment using fiducial markers." Master's thesis, 2019. http://hdl.handle.net/10773/29603.

Full text
Abstract:
3D reconstruction is the creation of three-dimensional models from the captured shape and appearance of real objects. It is a field that has its roots in several areas within computer vision and graphics, and has gained high importance in others, such as architecture, robotics, autonomous driving, medicine, and archaeology. Most current model acquisition technologies are based on LiDAR, RGB-D cameras, and image-based approaches such as visual SLAM. Despite the improvements that have been achieved, methods that rely on professional instruments and operation result in high costs, both capital and logistical. In this dissertation, we develop an optimization procedure capable of enhancing the 3D reconstructions created using a consumer-level hand-held RGB-D camera, a product that is widely available, easily handled, and has an interface familiar to the average smartphone user, through the utilisation of fiducial markers placed in the environment. Additionally, a tool was developed to allow the removal of said fiducial markers from the texture of the scene, as a complement to mitigate a downside of the approach taken, but one that may prove useful in other contexts.
Master's in Computer and Telematics Engineering
APA, Harvard, Vancouver, ISO, and other styles
28

Epstein, Emric. "Utilisation de miroirs dans un système de reconstruction interactif." Thèse, 2004. http://hdl.handle.net/1866/16668.

Full text
APA, Harvard, Vancouver, ISO, and other styles