Dissertations / Theses on the topic 'Camera Projector Calibration'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 28 dissertations / theses for your research on the topic 'Camera Projector Calibration.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Hilario, Maria Nadia. "Occlusion detection in front projection environments based on camera-projector calibration." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83866.
Tennander, David. "Automatic Projector Calibration for Curved Surfaces Using an Omnidirectional Camera." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209675.
This report presents a method for counteracting the distortions that arise when an image is projected onto a non-planar surface. By using an omnidirectional camera, an enclosing dome illuminated by several projectors can be calibrated. The camera was modelled with the Unified Projection Model, since that model can be adapted to a large number of camera systems. The projectors' images on the surface were decoded using Gray codes, and the optimal centre point of the calibrated image was then computed by numerically solving a quadratic NLP problem. Finally, a spline surface that counteracts the projection distortion is created through FAST-LTS regression. In the experimental set-up, a RICOH THETA S camera was used and calibrated with the omnidir module in OpenCV. A result the authors consider successful was achieved, and where several projectors overlapped a maximum error of 0.5° was measured. Further measurements suggest that part of this error arose from insufficient accuracy of the equipment during the evaluation phase. The result is regarded as successful, and the developed application will be used by ÅF Technology AB in their calibration of flight simulators.
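As an aside, the Gray-code stripe encoding mentioned in the abstract above can be illustrated with a minimal Python sketch; this is a generic formulation, not the thesis's implementation, and the vertical-stripe layout and column indexing are assumptions.

```python
import numpy as np

def gray_encode(n: int) -> int:
    """Convert a binary column index to its Gray-code value."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Recover the original column index from a Gray-code value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def gray_code_patterns(width: int, height: int) -> np.ndarray:
    """Stack of vertical-stripe patterns, one bit plane per projected image."""
    bits = int(np.ceil(np.log2(width)))
    cols = np.array([gray_encode(c) for c in range(width)], dtype=np.uint32)
    planes = [((cols >> b) & 1).astype(np.uint8) * 255 for b in reversed(range(bits))]
    return np.stack([np.tile(p, (height, 1)) for p in planes])

# Decoding captured images: threshold each capture, rebuild the Gray value
# per pixel across the bit planes, then map it back to a projector column
# with gray_decode.
```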
Korostelev, Michael. "Performance Evaluation for Full 3D Projector Calibration Methods in Spatial Augmented Reality." Master's thesis, Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/213116.
M.S.E.E.
Spatial Augmented Reality (SAR) has presented itself as an interesting tool not only for novel ways of visualizing information but also for developing creative works in the performing arts. The main challenge is to determine the accurate geometry of a projection space and an efficient, effective way to project digital media and information to create an augmented space. In our previous implementation of SAR, we developed a projector-camera calibration approach using infrared markers; however, the projection suffered severe distortion due to the lack of depth information in the projection space. For this research, we propose to develop an RGBD sensor-projector system to replace our current projector-camera SAR system. Proper calibration between the camera or sensor and the projector links vision to projection, answering the question of which point in camera space maps to which point in projection space. Calibration resolves the problem of capturing the geometry of the space and allows us to accurately augment the surfaces of volumetric objects and features. In this work, three calibration methods are examined for performance and accuracy. Two of these methods are existing adaptations of 2D camera-projector calibration (calibration using arbitrary planes and ray-plane intersection); the third is our proposed novel technique, which utilizes point cloud information from the RGBD sensor directly. Through analysis and evaluation using re-projection error, results are presented that identify the proposed method as practical and robust.
Temple University--Theses
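As a small illustration of the re-projection-error metric used in the abstract above to compare calibration methods, the following Python sketch computes an RMS re-projection error with OpenCV; the variable names and the assumption of an OpenCV-style camera model are illustrative, not taken from the thesis.

```python
import numpy as np
import cv2

def mean_reprojection_error(object_pts, image_pts, K, dist, rvec, tvec):
    """RMS distance between detected 2D points and projections of their 3D counterparts."""
    projected, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
    diff = projected.reshape(-1, 2) - image_pts.reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```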
Mosnier, Jérémie. "Etalonnage d'un système de lumière structurée par asservissement visuel." Thesis, Clermont-Ferrand 2, 2011. http://www.theses.fr/2011CLF22194.
This thesis is part of a national project named SRDViand, whose aim was to develop a robotic system for the deboning and cutting of animal meat. To determine the cut paths, a structured light system was developed; the term refers to vision systems that use light projection models for 3D reconstruction tasks. To achieve the best results, a new calibration method for structured light systems was defined. Based on an extensive state of the art and a proposed classification of these methods, we proposed to calibrate a camera-projector pair using visual servoing. The validity and the results of this method were assessed through numerous experimental tests conducted within the SRDViand project. Following the development of this method, a prototype for bovine cutting was produced.
Walter, Viktor. "Projekce dat do scény." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-240823.
Malík, Dalibor. "Zpracování dat z termokamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219683.
Full textYang, Liming. "Recalage robuste à base de motifs de points pseudo aléatoires pour la réalité augmentée." Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0025.
Registration is a very important task in Augmented Reality (AR): it provides the spatial alignment between the real environment and virtual objects. Unlike tracking (which relies on previous-frame information), wide-baseline localization finds the correct solution in a wide search space, so as to overcome initialization or tracking-failure problems. Various wide-baseline localization methods have been applied successfully, but for objects with little or no texture there is still no promising method. One possible solution is to rely on geometric information, which sometimes does not vary as much as texture or color. This dissertation focuses on new wide-baseline localization methods based entirely on geometric information, and more specifically on points. I propose two novel point pattern matching algorithms, RRDM and LGC. In particular, LGC registers 2D or 3D point patterns under any known transformation type and supports multi-pattern recognition. It has linear behavior with respect to the number of points, which allows for real-time tracking. It is applied to multi-target tracking and augmentation, as well as to 3D model registration. A practical method for projector-camera system calibration based on LGC is also proposed; it can be useful for large-scale Spatial Augmented Reality (SAR). In addition, I developed a method to estimate the rotation axis of a surface of revolution quickly and precisely from 3D data; it is integrated into a novel framework to reconstruct surfaces of revolution on dense SLAM in real time.
Silva, Roger Correia Pinheiro. "Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/3515.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A camera-projector system is capable of capturing three-dimensional geometric information about objects and real-world environments. The capture of geometry in such a system is based on the projection of structured light over an object by the projector, and the capture of the modulated scene through the camera. With a calibrated system, the deformation of the projected light caused by the object provides the information needed to reconstruct its geometry through triangulation. The present work describes the development of a high-definition camera-projector system (with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead to the reconstruction of geometry, such as camera-projector calibration, color calibration, image processing and triangulation, are detailed. The developed scanner uses the (b, s)-BCSL structured light coding, which employs the projection of a sequence of colored vertical stripes onto the scene. This coding scheme offers a flexible number of stripes for projection: the higher the number of stripes, the more detailed the captured geometry. One of the objectives of this work is to estimate the limit number of (b, s)-BCSL stripes possible within current high-definition video resolutions. This limit is the number of stripes that provides dense geometry reconstruction while keeping the error low. To evaluate the geometry reconstructed by the scanner for different numbers of stripes, we propose a protocol for error measurement. The protocol uses planes as objects to measure the quality of geometric reconstruction. From the point cloud generated by the scanner, the plane equation is estimated by least squares. For a fixed number of stripes, five independent scans of the plane are made: each scan leads to one equation; the mean plane, estimated from the union of the five point clouds, is also computed. A distance metric in projective space is used to evaluate the precision and accuracy of each number of projected stripes. In addition to the quantitative evaluation, the geometry of several objects is presented for qualitative evaluation. The results show that the limit number of stripes for high-resolution video allows a high density of points even on surfaces with high color variation.
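The plane-based error protocol described above can be sketched with a least-squares plane fit; the following NumPy snippet is a generic total-least-squares formulation, assumed for illustration rather than reproduced from the dissertation.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud by total least squares."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector of the centred cloud
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d

def plane_rms_error(points: np.ndarray, normal: np.ndarray, d: float) -> float:
    """RMS orthogonal distance of the points to the fitted plane."""
    return float(np.sqrt(np.mean((points @ normal + d) ** 2)))
```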
Zahrádka, Jiří. "Rozšířené uživatelské rozhraní." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236929.
PINHEIRO, SASHA NICOLAS DA ROCHA. "CAMERA CALIBRATION USING FRONTO PARALLEL PROJECTION AND COLLINEARITY OF CONTROL POINTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28011@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Crucial for any computer vision or augmented reality application, camera calibration is the process of obtaining the intrinsic and extrinsic parameters of a camera, such as the focal length, the principal point and the values that measure the optical distortion of the lens. Nowadays, the most widely used calibration method relies on images of a planar pattern taken from different perspectives, from which control points are extracted to set up a system of linear equations whose solution represents the camera parameters, followed by an optimization based on the 2D reprojection error. In this work, the ring calibration pattern was chosen because it offers higher accuracy in the detection of control points. By applying techniques such as fronto-parallel transformation, iterative refinement of the control points and adaptive segmentation of ellipses, our approach improves the result of the calibration process. Furthermore, we propose extending the optimization model by redefining the objective function to consider not only the 2D reprojection error but also the 2D collinearity error.
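One plausible way to express the 2D collinearity error mentioned above is the RMS distance of nominally collinear control points from their best-fit line; the sketch below is a generic formulation, not the thesis's exact objective term.

```python
import numpy as np

def collinearity_error(points_2d: np.ndarray) -> float:
    """RMS distance of (N, 2) points that should be collinear
    to the best-fit line through them (total least squares)."""
    centroid = points_2d.mean(axis=0)
    centred = points_2d - centroid
    # Line direction = dominant singular vector; residuals lie along
    # the orthogonal direction.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]  # unit vector orthogonal to the fitted line
    return float(np.sqrt(np.mean((centred @ normal) ** 2)))
```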
Nagdev, Alok. "Georeferencing digital camera images using internal camera model." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000343.
Luong, Quang-Tuan. "Matrice fondamentale et calibration visuelle sur l'environnement. Vers une plus grande autonomie des système robotiques." PhD thesis, Université Paris Sud - Paris XI, 1992. http://tel.archives-ouvertes.fr/tel-00549134.
Amara, Ashwini. "Object Detection and Tracking Using Uncalibrated Cameras." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1184.
Bandarupalli, Sowmya. "Vehicle detection and tracking using wireless sensors and video cameras." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/989.
Castanheiro, Letícia Ferrari. "Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses." Presidente Prudente, 2020. http://hdl.handle.net/11449/192117.
The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can form a lightweight, small and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques are presented in the literature to calibrate a dual-fisheye system. In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications with this type of system are presented. The calibrating bundle adjustment was performed in the CMC (calibration of multiple cameras) software using Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with a 190° FOV each. In order to evaluate the improvement from using points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations that lie only in the hemispherical field, and (2) points in the whole image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models can provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered ...
Master's
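For reference, the four projection models compared in the abstract above map the incidence angle theta of a ray to an image radial distance r. The sketch below uses the standard textbook forms (before adding the Conrady-Brown distortion terms), which are assumed rather than quoted from the thesis.

```python
import numpy as np

# Standard fisheye projection models: focal length f, incidence angle theta (radians).
FISHEYE_MODELS = {
    "equidistant":   lambda f, theta: f * theta,                    # r = f*theta
    "equisolid":     lambda f, theta: 2.0 * f * np.sin(theta / 2),  # r = 2f*sin(theta/2)
    "stereographic": lambda f, theta: 2.0 * f * np.tan(theta / 2),  # r = 2f*tan(theta/2)
    "orthogonal":    lambda f, theta: f * np.sin(theta),            # r = f*sin(theta)
}

def radial_distance(model: str, f: float, theta: np.ndarray) -> np.ndarray:
    """Radial image distance predicted by the chosen fisheye projection model."""
    return FISHEYE_MODELS[model](f, theta)
```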
Jordan, Samuel James. "Projector-Camera Calibration Using Gray Code Patterns." Thesis, 2010. http://hdl.handle.net/1974/5911.
Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2010-06-29.
Hung, Che-Yung, and 洪哲詠. "Camera-assisted Calibration Techniques for Merging Multi-projector Systems." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/97628708470738795905.
Lai, Hsin-Yi, and 賴馨怡. "Calibration of camera and projector with applications on 3D scanning." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/a5rtms.
Full text國立交通大學
應用數學系數學建模與科學計算碩士班
105
In this thesis, we propose a 3D point cloud construction algorithm using a projector-camera system. To achieve this, the camera and projector parameters must be accurately estimated with a calibration algorithm. The correspondence between camera pixels and projector pixels is established with a structured light technique. Finally, the object shape is reconstructed in the form of a point cloud by considering the epipolar geometry.
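A hedged sketch of the final triangulation step of such a projector-camera pipeline, assuming calibration has already produced 3x4 projection matrices for the camera and the projector (the projector treated as a second "camera"); the names are illustrative, not taken from the thesis.

```python
import numpy as np
import cv2

def triangulate(P_cam: np.ndarray, P_proj: np.ndarray,
                pts_cam: np.ndarray, pts_proj: np.ndarray) -> np.ndarray:
    """Triangulate matched camera/projector pixels into 3D points.

    P_cam, P_proj : 3x4 projection matrices from calibration.
    pts_cam, pts_proj : (N, 2) matched pixel coordinates, e.g. from
    structured-light decoding.
    Returns an (N, 3) point cloud.
    """
    pts_h = cv2.triangulatePoints(P_cam, P_proj,
                                  pts_cam.T.astype(np.float64),
                                  pts_proj.T.astype(np.float64))
    return (pts_h[:3] / pts_h[3]).T
```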
Sun, Hung-Chi, and 孫宏奇. "Calibration of Multi-views Projector-Camera System by Using Structured Light and Applications in 3D Scan." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/zb4fu3.
National Chiao Tung University
Master's Program in Mathematical Modeling and Scientific Computing, Department of Applied Mathematics
Academic year 106
We propose calibration and 3D reconstruction algorithms for a multiple-view projector-camera system with structured light. In order to generate high-accuracy calibration data, each projector-camera pair is calibrated by extracting correspondences of control points between the projector and the camera with structured light. The 3D model is reconstructed by computing depth information from similar triangles formed by the baseline and the distance between corresponding points in the camera image and the projector image. The multiple-view projector-camera system captures the object from different views with a turntable, and we merge all the data with the iterative closest point algorithm.
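For illustration, a minimal sketch of merging two partial scans with point-to-point ICP, here using the Open3D library (an assumption; the thesis does not state which implementation was used).

```python
import numpy as np
import open3d as o3d

def merge_scans(source_pts: np.ndarray, target_pts: np.ndarray,
                max_dist: float = 5.0) -> o3d.geometry.PointCloud:
    """Align source onto target with point-to-point ICP and return the merged cloud."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(result.transformation)  # apply the estimated rigid motion
    return source + target
```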
Bélanger, Lucie. "Calibration de systèmes de caméras et projecteurs dans des applications de création multimédia." Thèse, 2009. http://hdl.handle.net/1866/3864.
This thesis focuses on computer vision applications for technological art projects. Camera and projector calibration is discussed in the context of tracking applications and 3D reconstruction in visual arts and performance art. The thesis is based on two collaborations with québécois artists Daniel Danis and Nicolas Reeves. Projective geometry and classical camera calibration techniques, such as planar calibration and calibration from epipolar geometry, are detailed to introduce the techniques implemented in both artistic projects. The project realized in collaboration with Nicolas Reeves consists of calibrating a pan-tilt camera-projector system in order to adapt videos to be projected in real time on mobile cubic screens. To fulfil the project, we used classical camera calibration techniques combined with our proposed camera pose calibration technique for pan-tilt systems. This technique uses elliptic planes, generated by the observation of a point in the scene while the camera is panning, to compute the camera pose in relation to the rotation centre of the pan-tilt system. The project developed in collaboration with Daniel Danis is based on multi-camera calibration. For this studio theatre project, we developed a multi-camera calibration algorithm to be used with a wiimote network. The technique based on epipolar geometry allows 3D reconstruction of a trajectory in a large environment at a low cost. The results obtained from the camera calibration techniques implemented are presented alongside their application in real public performance contexts.
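As a generic illustration of calibration from epipolar geometry of the kind mentioned above (not the thesis's algorithm), the relative pose between two cameras can be recovered from tracked point correspondences when the intrinsic matrix K is known; all names here are illustrative.

```python
import numpy as np
import cv2

def relative_pose_from_tracks(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    """Estimate the relative rotation and translation (up to scale)
    between two cameras from (N, 2) matched image points."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K  # essential matrix from the fundamental matrix and intrinsics
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```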
Dhillon, Daljit Singh J. S. "Geometric And Radiometric Estimation In A Structured-Light 3D Scanner." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/2270.
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction." Thèse, 2010. http://hdl.handle.net/1866/4868.
The topic of this thesis revolves around three fundamental problems in computer vision: video tracking, camera calibration and shape recovery. The proposed methods are based solely on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear cameras (a.k.a. pushbroom) and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter a calibration algorithm that does not require a physical grid, along with a planar auto-calibration algorithm. Finally, we pointed our third research direction toward shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or the light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one. We achieve this by taking into account the visibility of the light spots in the camera.
This thesis was carried out under a joint-supervision (cotutelle) agreement with the Institut National Polytechnique de Grenoble (France). The research was conducted in the 3D vision laboratory (DIRO, UdM) and the PERCEPTION-INRIA laboratory (Grenoble).
Bouchard, Louis. "Reconstruction tridimensionnelle pour projection sur surfaces arbitraires." Thèse, 2013. http://hdl.handle.net/1866/9826.
This thesis falls within the field of computer vision. It focuses on stereoscopic camera calibration, camera-projector matching, 3D reconstruction, projector blending, point cloud meshing, and surface parameterization. Conducted as part of the LightTwist project at the Vision3D laboratory, the work presented in this thesis aims to facilitate video projections on large surfaces of arbitrary shape using more than one projector. This type of projection is often seen in theater, digital arts, and architectural projections. To this end, we begin with the calibration of the cameras, followed by a piecewise 3D reconstruction using an active unstructured light scanning method. An automated alignment and meshing of the partial reconstructions yields a complete 3D model of the projection surface. This thesis then introduces a new approach for the parameterization of 3D models based on an efficient computation of geodesic distances across triangular meshes. The only input required from the user is the manual selection of the boundaries of the projection area on the model. The final parameterization is computed using the geodesic distances obtained for each of the model's vertices. Until now, existing methods have not permitted the parameterization of models with a million vertices or more.
Wu, Yen-Nien. "Geometric and Photometric Calibration for Tiling Multiple Projectors with a Pan-Tilt-Zoom Camera." 2004. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1607200400163300.
Wu, Yen-Nien, and 吳延年. "Geometric and Photometric Calibration for Tiling Multiple Projectors with a Pan-Tilt-Zoom Camera." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/13351493115037455304.
Full text國立臺灣大學
資訊工程學研究所
92
Over the past few years, a considerable number of studies have addressed the construction of large, high-resolution displays. The aim of this thesis is to build a seamless large-scale display system by tiling multiple projectors with the help of a pan-tilt-zoom camera. It contributes to the display field by combining different techniques to create a large, high-resolution, and low-cost display system. This research considers two tasks in producing the seamless display: (1) geometric calibration and (2) photometric calibration. I develop a vision-based approach to accomplish these tasks by utilizing a pan-tilt-zoom video camera. Compared to previous work, my method for geometric calibration is more accurate because it works at a much higher effective resolution (several zoomed-in images are combined), and it requires neither camera calibration nor estimation of the 3D geometry of the display surface. Instead of a spectroradiometer, an expensive device previously used for color calibration, a camera is used because of its low cost. I take advantage of high dynamic range images to estimate the real color, together with a mapping between the camera and a colorimeter. With photometric calibration, the color across the different projectors becomes uniform. This system has great potential for many applications that require a large-format, high-resolution display, such as scientific visualization and visual display walls.
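A minimal sketch of the homography-style geometric correction commonly used when tiling projectors on a near-planar surface, assuming matched projector pixels and their camera (reference-frame) observations are already available; this is a generic illustration, not the method developed in the thesis.

```python
import numpy as np
import cv2

def projector_warp(proj_pts: np.ndarray, cam_pts: np.ndarray,
                   frame: np.ndarray, out_size: tuple) -> np.ndarray:
    """Estimate the reference-to-projector homography from matched points
    and pre-warp a frame so the tiled projection appears geometrically aligned."""
    H, _ = cv2.findHomography(cam_pts, proj_pts, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(frame, H, out_size)  # out_size = (width, height)
```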
Carapinha, Rui Filipe Santos. "Project i-RoCS: dirt detection system based on computer vision." Master's thesis, 2020. http://hdl.handle.net/10773/30482.
The final goal of the i-RoCS project is to provide an automatic and efficient robotic solution for cleaning industrial floors. The solution will integrate state-of-the-art computer vision algorithms for robot navigation and for monitoring the cleaning process. Cleaning industrial surfaces is one of the most important tasks for the safety of factory personnel: in the worst case, a damaged or slippery floor can lead to all kinds of accidents. This is the main reason why the most advanced technologies should be involved in this area, and this thesis takes a step in that direction. Digital cameras, with proper use and suitable algorithms, can be one of the richest sensors available in an industrial environment because of the information they can capture. This information is a conversion of the real world into digital data that can be processed further; from it, low-level computer vision algorithms can detect many features such as colours, lines, shapes, contours and edges, among others. In this thesis, state-of-the-art technology is introduced for the factory cleaning task. To this end, we present a study on the use of cameras and digital image processing to detect dirt on industrial floors. An automatic calibration method for the camera parameters is proposed to cope with the difficult lighting conditions that can be found inside factories. We developed low-level feature extraction algorithms for dirt detection that achieved good detection results; however, they are not satisfactory in terms of performance if we consider that they will run on a mobile robot. The last step was the implementation of algorithms based on deep learning, one of the most promising technologies of recent years for image processing. The proposed solution is a segmentation network followed by a regression network: the segmentation classifies the various types of floor patterns, and the regression produces the dirt level of each area.
Master's in Electronics and Telecommunications Engineering
Madeira, Tiago de Matos Ferreira. "Enhancement of RGB-D image alignment using fiducial markers." Master's thesis, 2019. http://hdl.handle.net/10773/29603.
3D reconstruction is the creation of three-dimensional models from the captured shape and appearance of real objects. It is a field that originated in several branches of computer vision and computer graphics, and that has gained great importance in areas such as architecture, robotics, autonomous driving, medicine and archaeology. Most current model acquisition technologies are based on LiDAR, RGB-D cameras and image-based approaches such as visual SLAM. Despite the improvements that have been achieved, methods that depend on professional instruments and their operation result in high capital and logistics costs. In this dissertation, an optimization process was developed that improves 3D reconstructions created with a handheld, consumer-grade RGB-D camera, which is easy to operate and has an interface familiar to smartphone users, through the use of fiducial markers placed in the environment. In addition, a tool was developed to remove said fiducial markers from the scene texture, as a complement that mitigates a drawback of the adopted approach but can also be useful in other contexts.
Master's in Computer and Telematics Engineering
Epstein, Emric. "Utilisation de miroirs dans un système de reconstruction interactif." Thèse, 2004. http://hdl.handle.net/1866/16668.