Academic literature on the topic 'Camera Projector Calibration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Camera Projector Calibration.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Camera Projector Calibration"

1

Motta, Thiago, Manuel Loaiza, Alberto Raposo, and Luciano Soares. "Kinect Projection Mapping." Journal on Interactive Systems 5, no. 3 (December 30, 2014): 1. http://dx.doi.org/10.5753/jis.2014.722.

Full text
Abstract:
Spatial augmented reality allows users to create a projected virtual environment on irregular surfaces. This demands an accurate camera-projector calibration process in order to produce 3D information precise enough to match the real object. This paper presents a framework for processing the data obtained from the calibration of a Kinect-projector system in visualization applications, allowing the user to create an augmented reality environment without going through an extensive camera-projector calibration process, while keeping the calibration precise enough for projection on irregular surfaces. Additionally, different calibration techniques were evaluated in order to identify the best approaches.
APA, Harvard, Vancouver, ISO, and other styles
2

Yakno, Marlina, Junita Mohamad-Saleh, Mohd Zamri Ibrahim, and W. N. A. W. Samsudin. "Camera-projector calibration for near infrared imaging system." Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 160–70. http://dx.doi.org/10.11591/eei.v9i1.1697.

Full text
Abstract:
Advanced biomedical engineering technologies are continuously changing medical practice to improve care for patients. Needle-insertion navigation during intravenous catheterization via near-infrared (NIR) imaging and a camera-projector is one such solution. However, the central problem is that the image captured by the camera misaligns with the image projected back onto the object of interest, so the projected image is not overlaid perfectly in the real world. In this paper, a camera-projector calibration method is presented. A polynomial algorithm is used to remove barrel distortion in the captured images, and scaling and translation transformations correct the geometric distortions introduced in the image acquisition process. Discrepancies between the captured and projected images are assessed: the alignment accuracy between them is 90.643%, indicating the feasibility of the approach for eliminating discrepancies between projection and navigation images.
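The pipeline described in this abstract (polynomial removal of barrel distortion, then scaling and translation to register the camera and projector frames) can be sketched as follows. This is a generic illustration: the single radial coefficient model and the `align` helper are assumptions, since the paper does not publish its exact polynomial.

```python
import numpy as np

def undistort_points(pts, center, k1, k2=0.0):
    """Correct barrel distortion with a polynomial radial model:
    r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4), applied about the image center."""
    d = pts - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2**2)

def align(pts, scale, t):
    """Scaling + translation transform registering one frame to the other."""
    return pts * scale + t
```

With `k1 = k2 = 0` the undistortion is the identity, which makes the model easy to sanity-check before fitting real coefficients.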
3

Yu, Guang, Bo Yang Yu, Shu Cai Yang, Li Wen, Wen Fei Dong, and Hui Wang. "The Projector Calibration Based on ZHANG’s Self-Calibration Method." Advanced Materials Research 981 (July 2014): 364–67. http://dx.doi.org/10.4028/www.scientific.net/amr.981.364.

Full text
Abstract:
Projector calibration can be seen as a special case of camera calibration: by using the projector to project a coded pattern, it establishes the relationship between three-dimensional space coordinates of points and the corresponding point coordinates on the projector's DMD image. In camera calibration, Zhang's self-calibration performs a maximum-likelihood refinement that takes lens distortion into account and finally recovers the camera's internal and external parameters. Applying this algorithm to projector calibration overcomes the complexity and poor robustness of the traditional linear calibration algorithm and improves the practicability of the calibration method. The method calibrates both the internal and external parameters of the projector, avoiding the need for separate internal or external calibration.
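The core step of Zhang's plane-based method is estimating a homography per view of the planar target; for a projector treated as an inverse camera, the correspondences run from DMD pattern points to their observed positions on the calibration plane. A minimal Direct Linear Transform sketch (a textbook building block, not the paper's implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the homography H mapping src -> dst (both Nx2, N >= 4)
    via the Direct Linear Transform: stack two equations per point
    and take the null vector of the system from the SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale
```

In Zhang's method, several such homographies then constrain the intrinsic matrix, followed by the maximum-likelihood refinement the abstract mentions.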
4

Zhao, Xue Jin, and Cheng Rui Zhang. "A New Camera and Projector Calibration Method Based on Discrete Computation." Applied Mechanics and Materials 229-231 (November 2012): 1171–75. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1171.

Full text
Abstract:
This paper presents a novel method for the calibration of a camera and projector based on discrete computation, which uses a linearly moving chessboard and a projection plane to build a calibration database for the camera and projector. The camera database contains multiple layers of data pairing the captured chessboard corners with their practical coordinates in the system; using this database and an interpolation algorithm, the accurate position for any given pixel in the captured image can be calculated. The chessboard is then replaced with a white plane onto which a chessboard image is projected. By capturing the projected image and computing the practical positions of its corners, a second database relating projected image pixels to practical coordinates is established. Using these two databases, the light ray for each pixel in a captured or projected image can be defined, which is useful for 3D scanning systems and other industrial systems, such as mask-image-projection stereolithography rapid prototyping.
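The interpolation step can be illustrated with a plain bilinear lookup into such a database of measured positions; the regular-grid layout assumed here is a simplification for illustration, not the paper's actual data structure.

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinearly interpolate a value at scalar coordinates (x, y)
    from a 2D array indexed as grid[y, x] (interior points only)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[y0, x0]
            + fx * (1 - fy) * grid[y0, x0 + 1]
            + (1 - fx) * fy * grid[y0 + 1, x0]
            + fx * fy * grid[y0 + 1, x0 + 1])
```

For a database storing practical coordinates per pixel, one such lookup per coordinate channel recovers the position of an arbitrary sub-pixel location.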
5

Van Crombrugge, Izaak, Rudi Penne, and Steve Vanlanduit. "Extrinsic Camera Calibration with Line-Laser Projection." Sensors 21, no. 4 (February 5, 2021): 1091. http://dx.doi.org/10.3390/s21041091.

Full text
Abstract:
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
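The rotation errors reported in this abstract can be quantified with the standard angular distance between an estimated and a reference rotation matrix, derived from the trace. This small helper is a common evaluation tool, not code from the paper:

```python
import numpy as np

def rotation_error_deg(R1, R2):
    """Angle in degrees of the relative rotation R1^T R2,
    using trace(R) = 1 + 2*cos(theta) for a rotation matrix."""
    cosang = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

The clip guards against values marginally outside [-1, 1] from floating-point round-off when the two rotations are nearly identical.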
6

Li, Wen Guo, and Shao Jun Duan. "Convenient Calibration Procedure for Structured Light Projection System." Advanced Materials Research 662 (February 2013): 777–80. http://dx.doi.org/10.4028/www.scientific.net/amr.662.777.

Full text
Abstract:
We present a convenient calibration method for structured light projection systems. The proposed calibration approach realizes 3D shape measurement without projector calibration, without system calibration and without a precise linear z-stage; the relative position between camera and projector can be arbitrary, and the only device involved is a plane board. Experimental results validated the accuracy of the proposed approach.
7

Liu, Yu Bao, Bin Liu, and Jun Yi Lin. "A Method of Line Structured Light Vision System Calibration Based on Stereo Vision." Applied Mechanics and Materials 397-400 (September 2013): 1453–58. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.1453.

Full text
Abstract:
It is difficult to obtain enough highly accurate control points for projector calibration in a line-structured-light vision system. A new projector calibration method based on binocular stereo vision is proposed in this paper. The two cameras are calibrated with a traditional camera calibration method and together form a binocular stereo vision system. In the projector calibration procedure, a planar template is placed at several different positions in front of the stereo vision system, and the two cameras capture the stripe images at each position simultaneously. The center points of the laser stripe serve as control points of the projector plane; their 3D coordinates are easily obtained through the binocular stereo vision principle, so the light plane can be calculated quickly. Experiments were carried out, and the results show that the proposed method is flexible and stable.
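Once the stripe center points have been triangulated by the stereo pair, calculating the light plane reduces to a least-squares plane fit through 3D points, e.g. via the SVD of the centered point cloud. A generic sketch (the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points.
    Returns (n, d) with the plane defined by n . x + d = 0;
    n is the singular vector of the smallest singular value."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]
    return n, -n @ c
```

The residual of each point against `n . x + d` then gives a direct quality measure for the calibrated light plane.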
8

Yu, Xiao Yang, Xiao Liang Meng, Hai Bin Wu, Xiao Ming Sun, and Li Wang. "Coded-Structured Light System Calibration Using Orthogonal Phase Shift Coding Combined with Zhang’s Method." Advanced Materials Research 981 (July 2014): 348–51. http://dx.doi.org/10.4028/www.scientific.net/amr.981.348.

Full text
Abstract:
In coded-structured-light three-dimensional systems, system calibration plays a vital role in measurement accuracy. Camera calibration methods are very mature, but projector calibration has been studied far less. This paper therefore proposes a projector calibration method with a simple calibration process and high accuracy. The method combines Zhang's plane-model calibration with orthogonal phase-shift coding. During calibration, phase-shift coding patterns establish the relationship between projector image coordinates and camera corner-point coordinates. From the image coordinates in the projector's perspective, the projector's internal and external parameter matrices are computed using Zhang's plane-model calibration toolbox. The results show that the proposed method is simple and flexible, the maximum relative error of the calibration parameters is 0.03%, and it meets the requirements of system calibration in medical and industrial fields.
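Phase-shift coding recovers a wrapped phase per pixel from N patterns shifted by a known step; a common four-step variant (N = 4 is an assumption here, since the abstract does not fix the number of shifts) is:

```python
import numpy as np

def phase_from_four_step(I1, I2, I3, I4):
    """Wrapped phase from four patterns I_k = A + B*cos(phi + k*pi/2),
    k = 0..3. Then I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi),
    so atan2 recovers phi independently of ambient term A and amplitude B."""
    return np.arctan2(I4 - I2, I1 - I3)
```

The recovered phase encodes a projector coordinate per camera pixel, which is exactly the correspondence needed to feed Zhang-style projector calibration.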
9

Yang, Tao, Yue Liu, and You Lu. "A Rapid Auto-Caliberation Method in Projector-Camera System." Applied Mechanics and Materials 58-60 (June 2011): 2308–13. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.2308.

Full text
Abstract:
A precise, fast, and fully automatic calibration method is proposed to address the shortcomings in currently used large-scale interactive camera-projector systems. These shortcomings include a small number of calibration points used in manual calibration, large errors, huge time consumption, and lack of professional quality operations. The proposed method applies mechanical wavelength switching in the projected image to capture multi-regional vertices. The co-linearity of each point in the projected images is calculated to determine the actual location of the interactive points in the projected image. The point-by-point computation adopted in the method promotes the automatic elimination of uncorrectable systematic errors in large-scale optical devices. The automatic error elimination not only increases the accuracy of the interactive system and reduces the complexity of system installation, but also increases the flexibility of the interactive system.
10

Portalés, Cristina, Emilio Ribes-Gómez, Begoña Pastor, and Antonio Gutiérrez. "Calibration of a camera-projector monochromatic system." Photogrammetric Record 30, no. 149 (March 2015): 82–99. http://dx.doi.org/10.1111/phor.12094.

Full text
More sources

Dissertations / Theses on the topic "Camera Projector Calibration"

1

Hilario, Maria Nadia. "Occlusion detection in front projection environments based on camera-projector calibration." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83866.

Full text
Abstract:
Camera-projector systems are increasingly being used to create large displays for data visualization, immersive environments and augmented reality. Front projection displays, however, suffer from occlusions, resulting in shadows and light being cast, respectively, onto the display and the user. Researchers have begun addressing occlusion detection to enable dynamic shadow removal and to facilitate automatic user sensing in interactive display applications. A camera-projector system for occlusion detection in front projection environments is presented. The approach is based on offline camera-projector geometric and color calibration, which then enables online, dynamic camera-view synthesis of arbitrary projected scenes. Occluded display regions are detected through pixel-wise differencing between predicted and captured camera images. The implemented system is demonstrated for dynamic shadow detection and removal using a dually overlapped projector display.
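The pixel-wise differencing between the predicted (synthesized) and captured camera views can be sketched as a thresholded absolute difference; the threshold value here is a hypothetical parameter, to be tuned against camera noise:

```python
import numpy as np

def occlusion_mask(predicted, captured, thresh=30):
    """Boolean mask of pixels whose captured intensity deviates from the
    predicted view by more than `thresh` (signed arithmetic avoids
    uint8 wrap-around)."""
    diff = np.abs(predicted.astype(np.int32) - captured.astype(np.int32))
    return diff > thresh
```

In a shadow-removal pipeline, the mask would drive a second projector to fill in the occluded regions.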
2

Tennander, David. "Automatic Projector Calibration for Curved Surfaces Using an Omnidirectional Camera." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209675.

Full text
Abstract:
This master's thesis presents an approach to removing the distortions generated by projecting onto non-flat surfaces. Using an omnidirectional camera, a full 360° dome could be calibrated and the corresponding angles between multiple projections calculated. The camera was modelled with the Unified Projection Model, allowing any omnidirectional camera system to be used. Surface geometry was captured using Gray code patterns, the optimal image centre was computed as a quadratic optimisation problem, and finally a spline surface countering the distortions was generated with the FAST-LTS regression algorithm. The developed system used a RICOH THETA S camera calibrated with the omnidir module in OpenCV. A desirable result was achieved: with overlapping projectors, a maximum error of 0.5° was measured, and testing indicates that part of this error may have been introduced by the evaluation measurements. The resulting application is considered a success and will be used by ÅF Technology AB during calibration of flight simulators.
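Gray code patterns encode each projector column (or row) so that consecutive codes differ in exactly one bit, making decoding robust at stripe boundaries. The classic encode/decode pair underlying such structured-light capture is:

```python
def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Projecting one pattern per bit and thresholding each camera frame yields, per pixel, the Gray code of the projector column that illuminated it.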
3

Korostelev, Michael. "Performance Evaluation for Full 3D Projector Calibration Methods in Spatial Augmented Reality." Master's thesis, Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/213116.

Full text
Abstract:
Electrical and Computer Engineering
M.S.E.E.
Spatial Augmented Reality (SAR) has proven to be an interesting tool not only for visualizing information in novel ways but also for developing creative works in the performing arts. The main challenge is to determine the accurate geometry of a projection space and to find an efficient and effective way to project digital media and information to create an augmented space. In our previous implementation of SAR, we developed a projector-camera calibration approach using infrared markers; however, the projection suffered severe distortion due to the lack of depth information in the projection space. For this research, we propose to develop an RGBD sensor - projector system to replace our current projector-camera SAR system. Proper calibration between the camera or sensor and the projector links vision to projection, answering the question of which point in camera space maps to which point in projection space. Calibration resolves the problem of capturing the geometry of the space and allows us to accurately augment the surfaces of volumetric objects and features. In this work, three calibration methods are examined for performance and accuracy: two are existing adaptations of 2D camera-projector calibrations (calibration using arbitrary planes and ray-plane intersection), and the third is our proposed novel technique, which uses point cloud information from the RGBD sensor directly. Through analysis and evaluation using re-projection error, results are presented identifying the proposed method as practical and robust.
Temple University--Theses
4

Mosnier, Jérémie. "Etalonnage d'un système de lumière structurée par asservissement visuel." Thesis, Clermont-Ferrand 2, 2011. http://www.theses.fr/2011CLF22194.

Full text
Abstract:
This thesis is part of a national project named SRDViand, whose aim was to develop a robotic system for deboning and cutting butchered animals. To determine the cutting paths, a structured light system was developed; it belongs to the family of vision systems that use light-projection patterns for 3D reconstruction tasks. To achieve the best results, a new calibration method for structured light systems was defined. Building on a broad state of the art and a proposed classification of existing methods, the camera-projector pair is calibrated using visual servoing. The validity and results of this method were assessed through numerous experiments conducted within the SRDViand project. Following the development of this method, a prototype for bovine cutting was built.
5

Walter, Viktor. "Projekce dat do scény." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-240823.

Full text
Abstract:
The focus of this thesis is the cooperation of cameras and projectors in projection of data into a scene. It describes the means and theory necessary to achieve such cooperation, and suggests tasks for demonstration. A part of this project is also a program capable of using a camera and a projector to obtain necessary parameters of these devices. The program can demonstrate the quality of this calibration by projecting a pattern onto an object according to its current pose, as well as reconstruct the shape of an object with structured light. The thesis also describes some challenges and observations from development and testing of the program.
6

Malík, Dalibor. "Zpracování dat z termokamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219683.

Full text
Abstract:
The aim of this master's thesis is to present thermal camera measurement with error minimization. The basic concepts of thermography are explained, along with the implementation of a postprocessing technique that back-projects a graphically modified thermogram onto the scene. This is closely related to scene design, calibration of the thermal camera with the projector, image rectification, thermogram processing with highlighting of interesting information, and the implementation of control elements as the user interface. The results obtained are analyzed and evaluated.
7

Yang, Liming. "Recalage robuste à base de motifs de points pseudo aléatoires pour la réalité augmentée." Thesis, Ecole centrale de Nantes, 2016. http://www.theses.fr/2016ECDN0025.

Full text
Abstract:
Registration is a very important task in Augmented Reality (AR): it provides the spatial alignment between the real environment and virtual objects. Unlike tracking, which relies on previous-frame information, wide-baseline localization finds the correct solution from a wide search space, so as to overcome initialization and tracking-failure problems. Various wide-baseline localization methods have been applied successfully, but for objects with little or no texture there is still no promising method. One possible solution is to rely on geometric information, which often varies less than texture or color. This dissertation focuses on new wide-baseline localization methods based entirely on geometric information, and more specifically on points. Two novel point-pattern matching algorithms, RRDM and LGC, are proposed. LGC, a refinement of RRDM, registers 2D or 3D point patterns under any known transformation type and supports multi-pattern recognition. It has linear behavior with respect to the number of points, which allows real-time tracking, and is applied to multi-target tracking and augmentation as well as to 3D model registration. A practical method for projector-camera system calibration based on LGC is also proposed, which can be useful for large-scale Spatial Augmented Reality (SAR); its results are comparable with the state of the art while requiring a smaller calibration target. Additionally, a method to estimate the rotation axis of a surface of revolution quickly and precisely from 3D data was developed and integrated in a novel framework to reconstruct surfaces of revolution in dense SLAM in real time.
8

Silva, Roger Correia Pinheiro. "Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/3515.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A camera-projector system is capable of capturing three-dimensional geometric information from objects and real-world environments. Geometry capture in such a system is based on the projection of structured light over an object by the projector and the capture of the modulated scene by the camera. With a calibrated system, the deformation of the projected light caused by the object provides the information needed to reconstruct its geometry through triangulation. The present work describes the development of a high-definition camera-projector scanner (with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead to the reconstruction of geometry, such as camera-projector calibration, color calibration, image processing and triangulation, are detailed. The developed scanner uses the (b, s)-BCSL structured light coding, which projects a sequence of colored vertical stripes onto the scene. This coding scheme offers a flexible number of stripes: the higher the number of stripes, the more detailed the captured geometry. One of the objectives of this work is to estimate the maximum number of (b, s)-BCSL stripes possible within current high-definition video resolutions, i.e., the number that provides dense geometry reconstruction while keeping the error low. To evaluate the geometry reconstructed by the scanner for different numbers of stripes, a protocol for error measurement is proposed. The protocol uses planes as objects to measure the quality of geometric reconstruction: from the point cloud generated by the scanner, the plane equation is estimated by least squares. For a fixed number of stripes, five independent scans of the plane are made; each scan leads to one equation, and a mean plane estimated from the union of the five point clouds is also computed. A distance metric in projective space is used to evaluate the precision and accuracy of each number of projected stripes. In addition to the quantitative evaluation, the geometry of several objects is presented for qualitative assessment. The results show that the maximum number of stripes for high-resolution video yields a high density of points even on surfaces with strong color variation.
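The error protocol (least-squares plane fit to a scanned point cloud, then a distance-based residual) can be sketched as follows. The RMS point-to-plane distance is used here as an illustrative metric; the thesis's own metric is defined in projective space:

```python
import numpy as np

def plane_lstsq(pts):
    """Fit z = a*x + b*y + c to Nx3 points by linear least squares;
    returns the coefficient vector (a, b, c)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def rms_plane_distance(pts, a, b, c):
    """RMS Euclidean distance of the points to the plane z = a*x + b*y + c."""
    resid = np.abs(a * pts[:, 0] + b * pts[:, 1] + c - pts[:, 2])
    return np.sqrt(np.mean((resid / np.sqrt(a * a + b * b + 1.0)) ** 2))
```

Repeating the fit over the five independent scans gives a spread of plane equations (precision), while the residual against the mean plane measures accuracy.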
9

Zahrádka, Jiří. "Rozšířené uživatelské rozhraní." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236929.

Full text
Abstract:
This thesis falls into the field of user interface design. It focuses on tangible user interfaces that use a camera and projector to augment physical objects with digital information, and includes a description of the calibration of those devices. The primary objective of this thesis is the implementation of an augmented user interface for application window management. The system consists of a stationary camera, an overhead projector and movable tangible objects - boards. The boards are equipped with fiducial markers so that they can be tracked in the camera image. The projector displays a conventional desktop onto the table and the tangible objects; for example, application windows can be projected onto the boards, moving and rotating simultaneously with them.
10

PINHEIRO, SASHA NICOLAS DA ROCHA. "CAMERA CALIBRATION USING FRONTO PARALLEL PROJECTION AND COLLINEARITY OF CONTROL POINTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28011@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Crucial for any computer vision or augmented reality application, camera calibration is the process of obtaining the intrinsic and extrinsic parameters of a camera, such as focal length, principal point and lens distortion values. Nowadays, the most common calibration method uses images of a planar pattern taken from different perspectives, from which control points are extracted to set up a system of linear equations whose solution represents the camera parameters, followed by an optimization based on the 2D reprojection error. In this work, the ring calibration pattern was chosen because it offers higher accuracy in the detection of control points. By applying techniques such as fronto-parallel transformation, iterative refinement of the control points and adaptive segmentation of ellipses, our approach improved the results of the calibration process. Furthermore, we propose extending the optimization model by modifying the objective function to consider not only the 2D reprojection error but also the 2D collinearity error.
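A 2D collinearity error of the kind mentioned in the abstract can be measured as the RMS orthogonal distance of detected control points to their best-fit (total least squares) line. A minimal sketch with illustrative names only, not the thesis implementation; it uses the fact that the smaller eigenvalue of the 2x2 scatter matrix of the centered points equals the sum of squared orthogonal residuals:

```python
import math

def collinearity_error(points):
    """RMS orthogonal distance of 2D points to their total-least-squares line."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Second moments of the centered points (2x2 scatter matrix entries).
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # The smaller eigenvalue of the scatter matrix is the sum of squared
    # residuals to the best-fit line through the centroid.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam_min = tr / 2 - math.sqrt(max(tr * tr / 4 - det, 0.0))
    return math.sqrt(max(lam_min, 0.0) / n)
```

For control points detected exactly on a pattern line the error is zero; deviations from zero can then be penalized in an extended objective function alongside the reprojection error.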
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Camera Projector Calibration"

1

Martynov, Ivan, Joni-Kristian Kamarainen, and Lasse Lensu. "Projector Calibration by “Inverse Camera Calibration”." In Image Analysis, 536–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21227-7_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Abe, Daisuke, Takayuki Okatani, and Koichiro Deguchi. "Flexible Online Calibration for a Mobile Projector-Camera System." In Computer Vision – ACCV 2010, 565–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19282-1_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lang, Jonas, and Thomas Schlegl. "Camera-Projector Calibration - Methods, Influencing Factors and Evaluation Using a Robot and Structured-Light 3D Reconstruction." In Intelligent Robotics and Applications, 413–27. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43518-3_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mozerov, Mikhail, Ariel Amato, Murad Haj, and Jordi Gonzàlez. "A Simple Method of Multiple Camera Calibration for the Joint Top View Projection." In Advances in Soft Computing, 164–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-75175-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, D., and J. Kofman. "Error compensated camera-projector calibration in shape measurement." In High Value Manufacturing: Advanced Research in Virtual and Rapid Prototyping, 441–46. CRC Press, 2013. http://dx.doi.org/10.1201/b15961-81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bergamasco, Filippo, Andrea Albarelli, and Andrea Torsello. "A Practical Setup for Projection-Based Augmented Maps." In Advanced Research and Trends in New Technologies, Software, Human-Computer Interaction, and Communicability, 13–22. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4490-8.ch002.

Full text
Abstract:
Projected Augmented Reality is a human-computer interaction scenario where synthetic data, rather than being rendered on a display, are projected directly onto the real world. Differing from screen-based approaches, which only require the pose of the camera with respect to the world, this setup poses the additional hurdle of knowing the relative pose between the capturing and projecting devices. In this chapter, the authors propose a thorough solution that addresses both camera and projector calibration using a simple fiducial marker design. Specifically, they introduce a novel Augmented Maps setup where the user can explore geographically located information by moving a physical inspection tool over a printed map. Since the tool presents both a projection surface and a 3D-localizable marker, it can be used to display suitable information about the area that it covers. The proposed setup has been evaluated in terms of calibration accuracy and the ease of use declared by the users.
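In setups like this, the mapping between a planar surface (map or marker plane) and camera or projector pixels is commonly modeled as a 3x3 homography. A hedged sketch, not the chapter's implementation: estimate H from four point correspondences via the standard DLT system (with the bottom-right entry fixed to 1) and map a point through it. All names are illustrative.

```python
# Estimate a planar homography from four correspondences and apply it.
# This is the textbook DLT formulation, shown for illustration only.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def homography(src, dst):
    """3x3 H (row-major, h22 = 1) with dst ~ H * src, from four correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u * (h6*x + h7*y + 1) = h0*x + h1*y + h2, and likewise for v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Map a 2D point through H in homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

With the four marker corners as correspondences, any point on the inspection tool's surface can be warped into projector coordinates for display.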
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Jwu-Sheng, and Yung-Jung Chang. "Self-Calibration of Eye-to-Hand and Workspace for Mobile Service Robot." In Service Robots and Robotics, 229–46. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0291-5.ch013.

Full text
Abstract:
The geometrical relationships among the robot arm, camera, and workspace are important for carrying out visual servo tasks. For industrial robots, the relationships are usually fixed and well calibrated by experienced operators. However, for service robots, particularly in mobile applications, the relationships may change. For example, when a mobile robot attempts to use visual information from environmental cameras to perform grasping, it is necessary to know the relationships before taking action. Moreover, the calibration should be done automatically. This chapter proposes a self-calibration method using a laser distance sensor mounted on the robot arm. The advantage of the method, compared with pattern-based ones, is that the workspace coordinate frame is also obtained at the same time using the projected laser spot. Further, it is not necessary for the robot arm to enter the camera's field of view for calibration. This increases safety when the workspace is initially unknown.
APA, Harvard, Vancouver, ISO, and other styles
8

Troll, Péter, Károly Szipka, and Andreas Archenti. "Indoor Localization of Quadcopters in Industrial Environment." In Advances in Transdisciplinary Engineering. IOS Press, 2020. http://dx.doi.org/10.3233/atde200183.

Full text
Abstract:
The research work in this paper was carried out to achieve advanced positioning capabilities of unmanned aerial vehicles (UAVs) for indoor applications. The paper includes the design of a quadcopter and the implementation of a control system with the capability to position the quadcopter indoors using an onboard visual pose estimation system, without the help of GPS. The project also covered the design and implementation of the quadcopter hardware and the control software. The developed hardware enables the quadcopter to carry at least 0.5 kg of additional payload. The system was developed on a Raspberry Pi single-board computer in combination with a PixHawk flight controller. The OpenCV library was used to implement the necessary computer vision. The open-source software solution was developed in the Robot Operating System (ROS) environment; it performs sensor reading and communication with the flight controller while recording data about its operation and transmitting it to the user interface. For the vision-based position estimation, pre-positioned printed markers were used. The markers were generated with ArUco coding, which defines the exact position and orientation of the quadcopter with the help of computer vision. The resulting data was processed in the ROS environment. A LiDAR with the Hector SLAM algorithm was used to map the objects around the quadcopter. The project also deals with the necessary camera calibration. The fusion of signals from the camera and the IMU (Inertial Measurement Unit) was achieved using an Extended Kalman Filter (EKF). The evaluation of the completed positioning system was performed with an OptiTrack external optical multi-camera measurement system. The introduced evaluation method is precise enough to be used to investigate the positioning performance of quadcopters, as well as to fine-tune the parameters of the controller and filtering approach. The payload capacity allows autonomous material handling indoors. Based on the experiments, the positioning accuracy of the system is sufficient for industrial applications.
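The camera/IMU fusion mentioned above uses an Extended Kalman Filter; for a single scalar position the idea reduces to the linear Kalman filter. A minimal 1D sketch under assumed noise parameters (the class name and the values of q and r are illustrative, not from the paper): predict with an IMU-derived velocity, then correct with a marker-based camera fix.

```python
# 1D Kalman filter sketch of camera/IMU fusion: the IMU drives the prediction,
# the camera's marker-based position fix drives the correction.

class Kalman1D:
    def __init__(self, x0, p0, q, r):
        # x: position estimate, p: its variance,
        # q: process noise rate, r: camera measurement noise variance.
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, velocity, dt):
        """Propagate the state with the IMU-derived velocity; uncertainty grows."""
        self.x += velocity * dt
        self.p += self.q * dt

    def update(self, z):
        """Blend in a camera position fix, weighted by the Kalman gain."""
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

After each update the estimate lies between the prediction and the measurement, and its variance shrinks; the full EKF extends this to the 3D pose with nonlinear measurement models.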
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Camera Projector Calibration"

1

Fleischmann, Oliver, and Reinhard Koch. "Fast projector-camera calibration for interactive projection mapping." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Liming, Jean-Marie Normand, and Guillaume Moreau. "Practical and Precise Projector-Camera Calibration." In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2016. http://dx.doi.org/10.1109/ismar.2016.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sakaue, Fumihiko, and Jun Sato. "Calibration of projector-camera systems from virtual mutual projection." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tao, Jun. "Slide projector calibration based on calibration of digital camera." In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.774689.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bevilacqua, M., C. Liguori, and A. Paolillo. "Stereo calibration for a camera - projector pair." In 2010 IEEE Instrumentation & Measurement Technology Conference Proceedings. IEEE, 2010. http://dx.doi.org/10.1109/imtc.2010.5488055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Moreno, Daniel, and Gabriel Taubin. "Simple, Accurate, and Robust Projector-Camera Calibration." In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT). IEEE, 2012. http://dx.doi.org/10.1109/3dimpvt.2012.77.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Horbach, Jan W., and Thao Dang. "Metric projector camera calibration for measurement applications." In Optics East 2006, edited by Peisen S. Huang. SPIE, 2006. http://dx.doi.org/10.1117/12.686000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bevilacqua, M., G. Di Leo, M. Landi, and A. Paolillo. "Self-calibration for a camera-projector pair." In SPIE Optical Metrology, edited by Fabio Remondino and Mark R. Shortis. SPIE, 2011. http://dx.doi.org/10.1117/12.889517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Amano, Toshiyuki. "Projection Center Calibration for a Co-located Projector Camera System." In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2014. http://dx.doi.org/10.1109/cvprw.2014.72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Garcia, Ricardo R., and Avideh Zakhor. "Geometric calibration for a multi-camera-projector system." In 2013 IEEE Workshop on Applications of Computer Vision (WACV). IEEE, 2013. http://dx.doi.org/10.1109/wacv.2013.6475056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
