
Dissertations / Theses on the topic 'Camera calibration'

Consult the top 50 dissertations / theses for your research on the topic 'Camera calibration.'

1

Zhang, Guoqiang. "Camera network calibration." E-thesis via HKUTO, The University of Hong Kong, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37011844.

2

Zhang, Guoqiang, and 張國強. "Camera network calibration." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37011844.

3

O'Kennedy, Brian James. "Stereo camera calibration." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53063.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: We present all the components needed for a fully-fledged stereo vision system, ranging from object detection through camera calibration to depth perception. We propose an efficient, automatic and practical method to calibrate cameras for use in 3D machine vision metrology. We develop an automated stereo calibration system that only requires a series of views of a manufactured calibration object in unknown positions. The system is tested against real and synthetic data, and we investigate the robustness of the proposed method compared to standard calibration practice. All the aspects of 3D stereo reconstruction are dealt with, and we present the necessary algorithms to perform epipolar rectification on images as well as to solve the correspondence and triangulation problems. It was found that the system performs well even in the presence of noise, and calibration is easy and requires no specialist knowledge.
AFRIKAANSE OPSOMMING: Ons beskryf al die komponente van 'n omvattende stereo visie sisteem. Die kern van die sisteem is 'n effektiewe, ge-outomatiseerde en praktiese metode om kameras te kalibreer vir gebruik in 3D rekenaarvisie. Ons ontwikkel 'n outomatiese, stereo kamerakalibrasie sisteem wat slegs 'n reeks beelde van 'n kalibrasie voorwerp in onbekende posisies vereis. Die sisteem word getoets met reële en sintetiese data, en ons vergelyk die robuustheid van die metode met die standaard algoritmes. Al die aspekte van die 3D stereo rekonstruksie word behandel en ons beskryf die nodige algoritmes om epipolêre rektifikasie op beelde te doen sowel as metodes om die korrespondensie- en diepte probleme op te los. Ons wys dat die sisteem goeie resultate lewer in die aanwesigheid van ruis en dat kamerakalibrasie outomaties kan geskied sonder dat enige spesialis kennis benodig word.
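
As a point of reference for the reconstruction step this abstract mentions, the following minimal sketch (not taken from the thesis) shows linear triangulation of a single point from two calibrated views; P1 and P2 are assumed 3x4 projection matrices from a prior stereo calibration, and x1, x2 are matching pixel coordinates.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point observed in two calibrated views."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The homogeneous solution is the right singular vector with the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]
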
4

Zhang, Hui. "Camera calibration from silhouettes." E-thesis via HKUTO, The University of Hong Kong, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37743752.

5

Tang, Zhongwei. "High precision camera calibration." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00675484.

Abstract:
The thesis focuses on precision aspects of 3D reconstruction, with a particular emphasis on camera distortion correction. The causes of imprecision in stereoscopy can be found at any step of the chain. Imprecision introduced at one step cancels the precision gained in the previous steps, and is then propagated, amplified or mixed with errors in the following steps, finally leading to an imprecise 3D reconstruction. It therefore seems impossible to directly improve the overall precision of a reconstruction chain that yields imprecise 3D data; the appropriate approach to obtain a precise 3D model is to study the precision of every component. Maximal attention is paid to camera calibration for three reasons. First, it is often the first component in the chain. Second, it is by itself already a complicated system containing many unknown parameters. Third, the intrinsic parameters of a camera only need to be calibrated once for a given camera configuration (and at constant temperature). The camera calibration problem has been considered solved for years. Nevertheless, calibration methods and models that were valid for past precision requirements are becoming unsatisfactory for new digital cameras that permit a higher precision. In our experiments, we regularly observed that current global calibration methods can leave behind a residual distortion error as large as one pixel, which can lead to distorted reconstructed scenes. We propose two methods in the thesis to correct the distortion with a far higher precision. With an objective evaluation tool, it will be shown that the finally achievable correction precision is about 0.02 pixels. This value measures the average deviation of an observed straight line crossing the image domain from its perfectly straight regression line. High precision is also needed or desired for other image processing tasks crucial in 3D, like image registration. In contrast to the advances in the invariance of feature detectors, matching precision has not been studied carefully. We analyze the SIFT method (scale-invariant feature transform) and evaluate its matching precision. It will be shown that by some simple modifications in the SIFT scale space, the matching precision can be improved to about 0.05 pixels on synthetic tests. A more realistic algorithm is also proposed to increase the registration precision between two real images when their transformation is assumed to be locally smooth. A multiple-image denoising method, called 'burst denoising', is proposed to take advantage of precise image registration to estimate and remove the noise at the same time. This method produces an accurate noise curve, which can be used to guide denoising by simple averaging and by the classic block matching method. Burst denoising is particularly powerful at recovering fine non-periodic textured parts in images, even compared to the best state-of-the-art denoising methods.
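
To make the quoted 0.02-pixel figure concrete, here is a small sketch (my own illustration, not the thesis's code) of the straightness measure it refers to: the mean orthogonal deviation of edge points that should lie on a straight line from their own total-least-squares regression line; points is an (N, 2) array of pixel coordinates sampled along one corrected line.

    import numpy as np

    def straightness_error(points):
        centered = points - points.mean(axis=0)
        # Total least squares: the line direction is the dominant right singular vector.
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        normal = Vt[-1]                  # unit normal of the fitted line
        distances = centered @ normal    # signed orthogonal distances to the regression line
        return np.abs(distances).mean()  # roughly 0.02 px after a good distortion correction
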
6

Zhang, Hui, and 張慧. "Camera calibration from silhouettes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37743752.

7

Frahm, Jan M. [Verfasser]. "Camera Self-Calibration with Known Camera Orientation / Jan M Frahm." Aachen : Shaker, 2005. http://d-nb.info/1186580186/34.

8

de, Laval Astrid. "Online Calibration of Camera Roll Angle." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94219.

Abstract:
Modern-day cars are often equipped with a vision system that collects information about the car and its surroundings. Camera calibration is extremely important in order to maintain high accuracy in automotive safety applications. The cameras are calibrated offline in the factory; however, the mounting of the camera may change slowly over time. If the angles of the actual mounting of the camera are known, compensation for them can be done in software. Therefore, online calibration is desirable. This master's thesis describes how to dynamically calibrate the roll angle. Two different methods have been implemented and compared. The first detects vertical edges in the image, such as houses and lamp posts. The second detects license plates on other cars in front of the camera in order to calculate the roll angle. The two methods are evaluated and the results are discussed. The results of the methods vary considerably, and the method that detects vertical edges turned out to give the best results.
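
A rough sketch of the vertical-edge idea follows; it is an illustration under assumed thresholds, not the thesis's implementation. Line segments that are vertical in the world should appear vertical in the image, so the median deviation of near-vertical segments from 90 degrees gives a roll estimate.

    import cv2
    import numpy as np

    def estimate_roll_deg(gray):
        edges = cv2.Canny(gray, 50, 150)
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                   minLineLength=60, maxLineGap=5)
        if segments is None:
            return None
        tilts = []
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))    # ~90 deg for a vertical segment
            deviation = ((angle - 90.0) + 90.0) % 180.0 - 90.0  # wrap into [-90, 90)
            if abs(deviation) < 10.0:                           # keep near-vertical edges only
                tilts.append(deviation)
        return float(np.median(tilts)) if tilts else None
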
9

Duan, Wenting. "Vanishing points detection and camera calibration." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548477.

10

Peng, Zhan. "Direct camera calibration from 'Plane+Parallax'." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440975.

11

Abdullah, Junaidi. "Camera self-calibration for augmented reality." Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427351.

12

SZENBERG, FLAVIO. "SCENE TRACKING WITH AUTOMATIC CAMERA CALIBRATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2001. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6519@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
É cada vez mais comum, na transmissão de eventos esportivos pelas emissoras de televisão, a inserção, em tempo real, de elementos sintéticos na imagem, como anúncios, marcações no campo, etc. Geralmente, essa inserção é feita através do emprego de câmeras especiais, previamente calibradas e dotadas de dispositivos que registram seu movimento e a mudança de seus parâmetros. De posse destas informações, é simples inserir novos elementos na cena com a projeção apropriada. Nesta tese, é apresentado um algoritmo para recuperar, em tempo real e sem utilizar qualquer informação adicional, a posição e os parâmetros da câmera em uma seqüência de imagens contendo a visualização de modelos conhecidos. Para tal, é explorada a existência, nessas imagens, de segmentos de retas que compõem a visualização do modelo cujas posições são conhecidas no mundo tridimensional. Quando se trata de uma partida de futebol, por exemplo, o modelo em questão é composto pelo conjunto das linhas do campo, segundo as regras que definem sua geometria e dimensões. Inicialmente, são desenvolvidos métodos para a extração de segmentos de retas longos da primeira imagem. Em seguida é localizada uma imagem do modelo no conjunto desses segmentos com base em uma árvore de interpretação. De posse desse reconhecimento, é feito um reajuste nos segmentos que compõem a visualização do modelo, sendo obtidos pontos de interesse que são repassados a um procedimento capaz de encontrar a câmera responsável pela visualização do modelo. Para a segunda imagem da seqüência em diante, apenas uma parte do algoritmo é utilizada, levando em consideração a coerência entre quadros, a fim de aumentar o desempenho e tornar possível o processamento em tempo real. Entre diversas aplicações que podem ser empregadas para comprovar o desempenho e a validade do algoritmo proposto, está uma que captura imagens através de uma câmera para demonstrar o funcionamento do algoritmo on line. A utilização de captura de imagens permite testar o algoritmo em inúmeros casos, incluindo modelos e ambientes diferentes.
In the television broadcasting of sports events, it has become very common to insert synthetic elements into the images in real time, such as ads, marks on the field, etc. Usually, this insertion is made using special cameras, previously calibrated and provided with devices that record their movements and parameter changes. With such information, inserting new objects into the scene with the adequate projection is a simple task. In the present work, we introduce an algorithm to retrieve, in real time and using no additional information, the position and parameters of the camera in a sequence of images containing the visualization of previously known models. For this, the method explores the existence in these images of straight-line segments that compose the visualization of the model and whose positions are known in the three-dimensional world. In the case of a soccer match, for example, the respective model is composed of the set of field lines determined by the rules that define their geometry and dimensions. Firstly, methods are developed to extract long straight-line segments from the first image. Then an image of the model is located in the set formed by such segments based on an interpretation tree. With this recognition, the segments that compose the visualization of the model are readjusted, yielding interest points which are then passed to a procedure able to locate the camera responsible for the model's visualization. From the second image of the sequence on, only a part of the algorithm is used, taking into account the coherence between frames, with the purpose of improving performance to allow real-time processing. Among the several applications that can be employed to evaluate the performance and quality of the proposed method is one that captures images through a camera to demonstrate the on-line functioning of the algorithm. The use of image capture makes it possible to test the algorithm in a great variety of instances, including different models and environments.
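
The pose-recovery step described above can be illustrated with a simplified sketch: assuming the intrinsic matrix K is already known (a simplification of what the thesis estimates) and that a few model points, such as field-line intersections, have already been matched to pixels, the camera pose follows from a PnP solver. All numbers below are hypothetical.

    import cv2
    import numpy as np

    model_pts = np.array([[0.0, 0.0, 0.0],      # hypothetical field-model coordinates (metres)
                          [0.0, 68.0, 0.0],
                          [105.0, 68.0, 0.0],
                          [105.0, 0.0, 0.0]], dtype=np.float64)
    image_pts = np.array([[102.0, 540.0],       # hypothetical detected corners (pixels)
                          [95.0, 210.0],
                          [1180.0, 200.0],
                          [1205.0, 560.0]], dtype=np.float64)
    K = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 360.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()     # camera centre in model coordinates
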
13

Takahashi, Kosuke. "Camera Calibration Based on Mirror Reflections." Kyoto University, 2018. http://hdl.handle.net/2433/232407.

14

Alturki, Abdulrahman S. "Principal Point Determination for Camera Calibration." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1500326474390507.

15

McLemore, Donald Rodney Jr. "Layered Sensing Using Master-Slave Cameras." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1253565440.

16

Stark, Per. "Machine vision camera calibration and robot communication." Thesis, University West, Department of Technology, Mathematics and Computer Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-1351.

Abstract:

This thesis is part of a larger project included in the European project AFFIX. The aim of the project is to develop a new method to assemble an aircraft engine part so that the weight and manufacturing costs are reduced. The proposal is to weld sheet metal parts instead of using cast parts. A machine vision system is suggested to be used in order to detect the joints for the weld assembly operation of the sheet metal. The final system aims to locate a hidden curve on an object. The coordinates for the curve are calculated by the machine vision system and sent to a robot. The robot should create and follow a path by using the coordinates. The accuracy for locating the curve to perform an approved weld joint must be within +/- 0.5 mm. This report investigates the accuracy of the camera calibration and the positioning of the robot. It also touches on the importance of good lighting when obtaining images for a vision system, and the development of a robot program that receives these coordinates and transforms them into robot movements is included. The camera calibration is done in a toolbox for MatLab and it extracts the intrinsic camera parameters, such as the distance between the centre of the lens and the optical detector in the camera (f), the lens distortion parameters and the principal point. It also returns the location and orientation of the camera at each image obtained during the calibration, the extrinsic parameters. The intrinsic parameters are used when translating between image coordinates and camera coordinates, and the extrinsic parameters are used when translating between camera coordinates and world coordinates. The result of this project is a transformation matrix that translates the robot's position into the camera's position, together with a robot program that can receive a large number of coordinates, store them and create a path to move along for the weld application.
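
The two coordinate transformations mentioned above can be summarized in a short sketch with made-up numbers: the extrinsic parameters (R, t) take world coordinates to camera coordinates, and the intrinsic matrix K takes camera coordinates to pixel coordinates (lens distortion is ignored here).

    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],   # assumed focal length f (px) and principal point
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                        # assumed camera orientation in the world frame
    t = np.array([0.0, 0.0, 1000.0])     # assumed camera translation (mm)

    def world_to_pixel(X_world):
        X_cam = R @ X_world + t          # extrinsic step: world -> camera
        uvw = K @ X_cam                  # intrinsic step: camera -> image plane
        return uvw[:2] / uvw[2]          # perspective division -> pixel coordinates

    print(world_to_pixel(np.array([50.0, -20.0, 0.0])))
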

17

FERNANDEZ, MANUEL EDUARDO LOAIZA. "MULTIPLE CAMERA CALIBRATION BASED ON INVARIANT PATTERN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14885@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
O processo de calibração de câmeras é uma etapa importante na instalação dos sistemas de rastreamento óptico. Da qualidade da calibração deriva o funcionamento correto e preciso do sistema de rastreamento. Diversos métodos de calibração têm sido propostos na literatura em conjunto com o uso de artefatos sintéticos definidos como padrões de calibração. Esses padrões, de forma e tamanho conhecidos, permitem a aquisição de pontos de referência que são utilizados para a determinação dos parâmetros das câmeras. Para minimizar erros, esta aquisição deve ser feita em todo o espaço de rastreamento. A fácil identificação dos pontos de referência torna o processo de aquisição eficiente. A quantidade e a qualidade das relações geométricas das feições do padrão influenciam diretamente na precisão dos parâmetros de calibração obtidos. É nesse contexto que esta tese se encaixa, propondo um novo método para múltipla calibração de câmeras, que é eficiente e produz resultados tão ou mais precisos que os métodos atualmente disponíveis na literatura. Nosso método também propõe um novo tipo de padrão de calibração que torna a tarefa de captura e reconhecimento de pontos de calibração mais robusta e eficiente. Deste padrão também derivam relações que aumentam a precisão do rastreamento. Nesta tese o processo de calibração de múltiplas câmeras é revisitado e estruturado de forma a permitir uma comparação das principais propostas da literatura com o método proposto. Esta estruturação também dá suporte a uma implementação flexível que permite a reprodução numérica de diferentes propostas. Finalmente, este trabalho apresenta resultados numéricos que permitem tirar algumas conclusões.
The calibration of multiple cameras is an important step in the installation of optical tracking systems. The accuracy of a tracking system is directly related to the quality of the calibration process. Several calibration methods have been proposed in the literature in conjunction with the use of artifacts, called calibration patterns. These patterns, with shape and size known, allow the capture of reference points to compute camera parameters. To yield good results these points must be uniformly distributed over the tracking area. The determination of the reference points in the image is an expensive process prone to errors. The use of a good calibration pattern can reduce these problems. This thesis proposes a new multiple camera calibration method that is efficient and yields better results than previously proposed methods available in the literature. Our method also proposes the use of a new simple calibration pattern based on perspective invariant properties and useful geometric properties. This pattern yields robust reference point identification and more precise tracking. This thesis also revisits the multiple calibration process and suggests a framework to compare the existing methods including the one proposed here. This framework is used to produce a flexible implementation that allows a numerical evaluation that demonstrates the benefits of the proposed method. Finally the thesis presents some conclusions and suggestions for further work.
18

Li, Yan, and 李燕. "3D reconstruction and camera calibration from circular." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://ndltd.ncl.edu.tw/handle/41449873847192900368.

19

Szczepanski, Michał. "Online stereo camera calibration on embedded systems." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.

Abstract:
Cette thèse décrit une approche de calibration en ligne des caméras stéréo pour des systèmes embarqués. Le manuscrit introduit une nouvelle mesure de la qualité du service de cette fonctionnalité dans les systèmes cyber physiques. Ainsi, le suivi et le calcul des paramètres internes du capteur (requis pour de nombreuses tâches de vision par ordinateur) est réalisé dynamiquement. La méthode permet à la fois d'augmenter la sécurité et d'améliorer les performances des systèmes utilisant des caméras stéréo. Elle prolonge la durée de vie des appareils grâce à cette procédure d'auto-réparation, et peut accroître l'autonomie. Des systèmes tels que les robots mobiles ou les lunettes intelligentes en particulier peuvent directement bénéficier de cette technique.La caméra stéréo est un capteur capable de fournir un large spectre de données. Au préalable, le capteur doit être calibré extrinsèquement, c'est à dire que les positions relatives des deux caméras doivent être déterminées. Cependant, cette calibration extrinsèque peut varier au cours du temps à cause d'interactions avec l'environnement extérieur par exemple (chocs, vibrations...). Ainsi, une opération de recalibration permet de corriger ces effets. En effet, des données mal comprises peuvent entraîner des erreurs et le mauvais fonctionnement des applications. Afin de contrer un tel scénario, le système doit disposer d'un mécanisme interne, la qualité des services, pour décider si les paramètres actuels sont corrects et/ou en calculer des nouveaux, si nécessaire. L'approche proposée dans cette thèse est une méthode d'auto-calibration basée sur l'utilisation de données issues uniquement de la scène observée (sans modèles contrôlés). Tout d'abord, nous considérons la calibration comme un processus système s'exécutant en arrière-plan devant fonctionner en continu et en temps réel. Cette calibration interne n'est pas la tâche principale du système, mais la procédure sur laquelle s'appuient les applications de haut niveau. Pour cette raison, les contraintes systèmes limitent considérablement l'algorithme en termes de complexité, de mémoire et de temps. La méthode de calibration proposée nécessite peu de ressources et utilise des données standards provenant d'applications de vision par ordinateur, de sorte qu'elle est masquée à l'intérieur du pipeline applicatif. Dans ce manuscrit, de nombreuses discussions sont consacrées aux sujets liés à la calibration de caméras en ligne pour des systèmes embarqués, tels que des problématiques sur l'extraction de points d'intérêts robustes et au calcul du facteur d'échelle, les aspects d’implémentation matérielle, les applications de haut niveau nécessitant cette approche, etc.Enfin, cette thèse décrit et explique une méthodologie pour la constitution d'un nouveau type d'ensemble de données, permettant de représenter un changement de position d'une caméra,pour valider l’approche. Le manuscrit explique également les différents environnements de travail utilisés dans la réalisation des jeux de données et la procédure de calibration de la caméra. De plus, il présente un premier prototype de casque intelligent, sur lequel s’exécute dynamiquement le service d’auto-calibration proposé. Enfin, une caractérisation en temps réel sur un processeur embarqué ARM Cortex A7 est réalisée
This thesis describes an approach for online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. Thus, the manuscript proposes dynamic monitoring and calculation of the internal sensor parameters required for many computer vision tasks. The method improves both security and system efficiency using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, which increases autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique. The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, this sensor must be extrinsically calibrated, i.e. the relative positions of the two cameras must be determined. However, camera extrinsic calibration can change over time due to interactions with the external environment, for example shocks or vibrations. Thus, a recalibration operation allows correcting these effects. Indeed, misunderstood data can lead to errors and malfunction of applications. In order to counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or calculate new ones, if necessary. The approach proposed in this thesis is a self-calibration method based on the use of data coming only from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background that has to run continuously in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline. In this manuscript, we present many discussions of topics related to online stereo calibration on embedded systems, such as problems in the extraction of robust points of interest, the calculation of the scale factor, hardware implementation aspects, high-level applications requiring this approach, etc. Finally, this thesis describes and explains a methodology for building a new type of dataset to represent the change of the camera position, used to validate the approach. The manuscript also explains the different work environments used in the realization of the datasets and the camera calibration procedure. In addition, it presents the first prototype of a smart helmet, on which the proposed self-calibration service is dynamically executed. Finally, this thesis characterizes the real-time calibration on an embedded ARM Cortex A7 processor.
20

Van, Hook Richard L. "A Comparison of Monocular Camera Calibration Techniques." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1400063414.

21

Tillapaugh, Bennet Howd. "Indirect camera calibration in a medical environment /." Online version of thesis, 2008. http://hdl.handle.net/1850/7900.

22

Zhai, Menghua. "Deep Probabilistic Models for Camera Geo-Calibration." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/74.

Abstract:
The ultimate goal of image understanding is to transfer visual images into numerical or symbolic descriptions of the scene that are helpful for decision making. Knowing when, where, and in which direction a picture was taken, the task of geo-calibration makes it possible to use imagery to understand the world and how it changes in time. Current models for geo-calibration are mostly deterministic, which in many cases fails to model the inherent uncertainties when the image content is ambiguous. Furthermore, without a proper modeling of the uncertainty, subsequent processing can yield overly confident predictions. To address these limitations, we propose a probabilistic model for camera geo-calibration using deep neural networks. While our primary contribution is geo-calibration, we also show that learning to geo-calibrate a camera allows us to implicitly learn to understand the content of the scene.
23

Grans, Sebastian. "Simplifying stereo camera calibration using M-arrays." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397003.

Abstract:
Digitization of objects in three dimensions, also known as 3D scanning, is becoming increasingly prevalent in fields ranging from manufacturing and medicine to cultural heritage preservation. Many 3D scanning methods rely on cameras to recover depth information, and the accuracy of the resulting 3D scan is therefore dependent on their calibration. The calibration process is, for the end-user, relatively cumbersome due to how the popular computer vision libraries have chosen to implement calibration target detection. In this thesis, we have therefore focused on developing and implementing a new type of calibration target to simplify the calibration process for the end-user. The calibration board that was designed is based on colored circular calibration points which form an M-array, where each local neighborhood uniquely encodes the coordinates. This allows the board to be decoded despite being occluded or partially out of frame, in contrast to the calibration board implemented in most software libraries and toolboxes, which consists of a standard black and white checkered calibration board that does not allow partial views. Our board was assessed by calibrating single cameras and high-FOV cameras and comparing it to regular calibration. A structured-light 3D scanning stereo setup was also calibrated and used to scan and reconstruct calibrated artifacts. The reconstructions could then be compared to the real artifacts. In all experiments, we have been able to provide results similar to those of the checkerboard, while also being subjectively easier to use due to the support for partial observation. We have also discussed potential methods to further improve our target in terms of accuracy and ease of use.
24

Bergamasco, Filippo <1985>. "High-accuracy camera calibration and scene acquisition." Doctoral thesis, Università Ca' Foscari Venezia, 2014. http://hdl.handle.net/10579/5602.

Abstract:
In this thesis we present some interesting new approaches in the field of camera calibration and high-accuracy scene acquisition. The first part is devoted to the camera calibration problem exploiting targets composed of circular features. Specifically, we start by improving some previous work on a family of fiducial markers which are leveraged to be used as calibration targets to recover both extrinsic and intrinsic camera parameters. Then, by using the same geometric concepts developed for the markers, we present a method to calibrate a pinhole camera by observing a set of generic coplanar circles. In the second part we move our attention to unconstrained (non-pinhole) camera models. We begin by asking whether such models can also be effectively applied to quasi-central cameras, and present a powerful calibration technique that exploits active targets to estimate the huge number of parameters required. Then, we apply a similar method to calibrate a structured-light projector during the range-map acquisition process to improve both accuracy and coverage. Finally, we propose a way to lower the complexity of a complete unconstrained model toward a pinhole configuration while still allowing a completely generic distortion map. In the last part we study two different scene acquisition problems, namely industry-grade 3D geometry measurements and dichromatic model parameter recovery from multi-spectral images. In the former, we propose a novel visual-inspection device for the dimensional assessment of metallic pipe intakes. In the latter, we formulate a state-of-the-art optimization approach for the simultaneous recovery of the optical flow and the dichromatic coefficients of a scene by analyzing two subsequent frames.
25

Jansson, Sebastian. "On Vergence Calibration of a Stereo Camera System." Thesis, Linköpings universitet, Institutionen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84770.

Abstract:
Modern cars can be bought with camera systems that watch the road ahead. They can be used for many purposes; one use is to alert the driver when other cars are on a collision course. If the warning system is to be reliable, the input data must be correct. One input can be the depth image from a stereo camera system, and one reason for the depth image to be wrong is that the vergence angle between the cameras is erroneously calibrated. Even if the calibration is accurate from production, there is a risk that the vergence changes due to temperature variations when the car is started. This thesis proposes one solution for short-time live calibration of a stereo camera system, where the speedometer data available on the CAN bus is used as reference. The motion of the car is estimated using visual odometry, which will be affected by any errors in the calibration. The vergence angle is then altered virtually until the estimated speed is equal to the reference speed. The method is analyzed for noise and tested on real data. It is shown that detection of calibration errors down to 0.01 degrees is possible under certain circumstances using the proposed method.
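
Schematically, the proposed correction can be pictured as a one-dimensional search over a vergence offset, keeping whichever offset makes the visually estimated speed agree with the speedometer. The sketch below is only an outline: estimate_speed_from_vo is a hypothetical placeholder for a full stereo visual-odometry pipeline, and the grid search stands in for whatever optimization the thesis actually uses.

    import numpy as np

    def calibrate_vergence(frames, can_speed, estimate_speed_from_vo,
                           search_range_deg=0.2, steps=41):
        candidates = np.linspace(-search_range_deg, search_range_deg, steps)
        errors = [abs(estimate_speed_from_vo(frames, dv) - can_speed) for dv in candidates]
        return candidates[int(np.argmin(errors))]  # offset that best matches the reference speed
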
26

Beck, Johannes [Verfasser], and C. [Akademischer Betreuer] Stiller. "Camera Calibration with Non-Central Local Camera Models / Johannes Beck ; Betreuer: C. Stiller." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1231361492/34.

27

Ozuysal, Mustafa. "Manual And Auto Calibration Of Stereo Camera Systems." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605296/index.pdf.

Abstract:
To make three-dimensional measurements using a stereo camera system, the intrinsic and extrinsic calibration of the system should be obtained. Furthermore, to allow zooming, intrinsic parameters should be re-estimated using only scene constraints. In this study both manual and autocalibration algorithms are implemented and tested. The implemented manual calibration system is used to calculate the parameters of the calibration with the help of a planar calibration object. The method is tested on different internal calibration settings, and results of 3D measurements using the obtained calibration are presented. Two autocalibration methods have been implemented. The first one requires a general motion while the second method requires a pure rotation of the cameras. The autocalibration methods require point matches between images. To achieve a fully automated process, robust algorithms for point matching have been implemented. For the case of general motion the fundamental matrix relation is used in the matching algorithm. When there is only rotation between views, the homography relation is used. The results of variations on the autocalibration methods are also presented. The result of the manual calibration has been found to be very reliable. The results of the first autocalibration method are not accurate enough, but it has been shown that calibration from rotating cameras is precise enough if the rotation between images is sufficiently large.
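
For the point-matching stage described above, a typical robust scheme looks like the sketch below (an illustration with OpenCV, not the thesis's code): putative ORB matches are filtered with the fundamental-matrix relation under general motion, or with a homography when the cameras only rotate.

    import cv2
    import numpy as np

    def robust_matches(img1, img2, pure_rotation=False):
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        if pure_rotation:
            _, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        else:
            _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        inliers = mask.ravel().astype(bool)
        return pts1[inliers], pts2[inliers]
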
28

Sjöholm, Daniel. "Calibration using a general homogeneous depth camera model." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204614.

Abstract:
Being able to accurately measure distances in depth images is important for accurately reconstructing objects. But the measurement of depth is a noisy process and depth sensors could use additional correction even after factory calibration. We regard the pair of depth sensor and image sensor to be one single unit, returning complete 3D information. The 3D information is combined by relying on the more accurate image sensor for everything except the depth measurement. We present a new linear method of correcting depth distortion, using an empirical model based around the constraint of only modifying depth data, while keeping planes planar. The depth distortion model is implemented and tested on the Intel RealSense SR300 camera. The results show that the model is viable and generally decreases depth measurement errors after calibrating, with an average improvement in the 50 percent range on the tested data sets.
Att noggrant kunna mäta avstånd i djupbilder är viktigt för att kunna göra bra rekonstruktioner av objekt. Men denna mätprocess är brusig och dagens djupsensorer tjänar på ytterligare korrektion efter fabrikskalibrering. Vi betraktar paret av en djupsensor och en bildsensor som en enda enhet som returnerar komplett 3D information. 3D informationen byggs upp från de två sensorerna genom att lita på den mer precisa bildsensorn för allt förutom djupmätningen. Vi presenterar en ny linjär metod för att korrigera djupdistorsion med hjälp av en empirisk modell, baserad kring att enbart förändra djupdatan medan plana ytor behålls plana. Djupdistortionsmodellen implementerades och testades på kameratypen Intel RealSense SR300. Resultaten visar att modellen fungerar och i regel minskar mätfelet i djupled efter kalibrering, med en genomsnittlig förbättring kring 50 procent för de testade dataseten.
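
As a much-simplified illustration of correcting only the depth channel (the thesis uses a richer homogeneous model), a global gain and offset can be fitted by least squares against reference depths, for example taken from a plane at a known distance; the function names below are my own.

    import numpy as np

    def fit_depth_correction(measured_depth, reference_depth):
        A = np.column_stack([measured_depth, np.ones_like(measured_depth)])
        (gain, offset), *_ = np.linalg.lstsq(A, reference_depth, rcond=None)
        return gain, offset

    def correct(depth, gain, offset):
        return gain * depth + offset
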
29

Du Toit, Nicolaas Serdyn. "Calibration of UV-sensitive camera for corona detection." Thesis, 2007. http://hdl.handle.net/10019/1016.

30

Valois, Jean-Sébastien. "Monocular camera calibration assessment for mid-range photogrammetry." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33348.

Abstract:
The CCD (Charge Coupled Device) image formation theory is at the foundation of 3D vision systems. Ideally, a five-element block diagram can model this process. The first block represents the nonlinear distortions caused by camera lenses. The second and third elements gather the low-pass filter effects due to lens aberrations and CCD phenomenon. A fourth block illustrates the quantization effects induced by a series of discrete photosensitive elements on the CCD and by the A/D conversion for analog cameras. The last block represents the addition of random noise on the discrete signal. Because step-like luminance transitions undergoing lens distortion remain step-like, it is possible to precisely correct for the distortions after the edge localization. The efficient distortion correction process is exactly where the camera lens calibration challenge resides. The calibration exercise also seeks the intrinsic and extrinsic camera parameters, i.e. the information that relates to the camera optics and the information that describes the location and orientation of the camera in 3D space. This thesis presents a review and evaluation of several methods designed for optimal accuracy on the parameters evaluation. (Abstract shortened by UMI.)
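
The five-block formation model summarized above can be mimicked with a toy simulation (my own illustration, with arbitrary parameter values): a radial lens distortion, a low-pass blur standing in for lens aberrations and CCD effects, quantization, and additive noise are applied in sequence to an ideal grayscale image.

    import cv2
    import numpy as np

    def simulate_ccd(ideal, k1=-0.2, sigma=1.2, noise_std=2.0):
        h, w = ideal.shape
        u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
        x, y = (u - w / 2) / w, (v - h / 2) / w          # normalized coordinates
        r2 = x * x + y * y
        map_x = (x * (1 + k1 * r2)) * w + w / 2          # block 1: radial lens distortion
        map_y = (y * (1 + k1 * r2)) * w + h / 2
        distorted = cv2.remap(ideal, map_x, map_y, cv2.INTER_LINEAR)
        blurred = cv2.GaussianBlur(distorted, (0, 0), sigma)             # blocks 2-3: low-pass effects
        sampled = np.round(blurred).astype(np.float32)                   # block 4: quantization
        noisy = sampled + np.random.normal(0.0, noise_std, ideal.shape)  # block 5: additive noise
        return np.clip(noisy, 0, 255).astype(np.uint8)
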
31

COSTA, HUMBERTO SILVINO ALVES DA. "CALIBRATION OF A THERMOGRAPHIC CAMERA FOR PRODUCTION PLANNING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11021@1.

Abstract:
INSTITUTO NACIONAL DE METROLOGIA, QUALIDADE E TECNOLOGIA
LIGHT
O aumento da temperatura de equipamentos de produção de energia elétrica é um indicativo de seu mau funcionamento ou da necessidade de uma manutenção preventiva antes que limites críticos sejam alcançados. Uma técnica utilizada para o diagnóstico é a interpretação do sinal infravermelho captado por uma câmera que fornece uma imagem do campo visual em questão, normalmente conhecida por termovisor. Neste trabalho foi desenvolvida uma metodologia para interpretar o seu sinal tendo em vista o planejamento de manutenção. Inicialmente, foi projetado um dispositivo para calibração de um termovisor na PUC-Rio. Ele consta de um bloco cilíndrico de latão, imerso em um banho de temperatura controlada. A seguir, o termovisor foi calibrado no corpo negro do INMETRO. Através da comparação entre os valores medidos pelo termovisor na PUC-Rio e no INMETRO, a emissividade da superfície pode ser determinada, e ajustada no instrumento para medição de temperatura com superfícies semelhantes. Com o termovisor calibrado, foi feita uma análise do impacto da incerteza de medição de temperatura sobre os procedimentos atualmente empregados pela concessionária de energia elétrica, LIGHT ENERGIA S.A., de modo a otimizar os procedimentos de manutenção de seus equipamentos.
An increase in the operating temperature of electric energy production equipment is a sign of poor performance or of the need for maintenance before critical limits are reached. As a diagnostic tool, the interpretation of the infrared signal received by a camera that registers an image of the target, usually referred to as a thermographic camera, is often used. In this work, a methodology was developed to interpret the infrared signal from such a camera, aiming at maintenance planning. Initially, a device was designed to calibrate the thermographic camera at PUC-Rio. It consists of a cylindrical brass block, placed inside a controlled-temperature bath, having its upper surface painted black and placed about 3 mm above the liquid surface of the bath. Holes were drilled radially, slightly below the block's upper surface, so that its temperature could be measured by inserted thermocouples. Next, the instrument was calibrated against a black body at INMETRO. The surface emissivity was calculated from the comparison between the calibration results at PUC-Rio and INMETRO. After calibration, the impact of the uncertainty of several parameters on temperature measurement was calculated, following the procedures presently adopted by the electric energy utility company LIGHT ENERGIA S.A., in order to optimize the maintenance procedures for its equipment.
32

Liebowitz, David. "Camera calibration and reconstruction of geometry from images." Thesis, University of Oxford, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.393408.

Abstract:
This thesis addresses the issues of combining camera calibration constraints from various sources and reconstructing scene geometry from single and multiple views. A geometric approach is taken, associating both structure recovery and calibration with geometric entities. Three sources of calibration constraints are considered: scene constraints, such as the parallelism and orthogonality of lines, constraints from partial knowledge of camera parameters, and constraints derived from the motion between views. First, methods of rectifying the projective distortion in an imaged plane are examined. Metric rectification constraints are developed by constraining the imaged plane circular points. The internal camera parameters are associated with the absolute conic. It is shown how imaged plane circular points constrain the image of the absolute conic, and are constrained by a known absolute conic in return. A method of using planes with known metric structure as a calibration object is developed. Next, calibration and reconstruction from single views is addressed. A well known configuration of the vanishing points of three orthogonal directions and knowledge that the camera has square pixels is expressed geometrically and subjected to degeneracy and error analysis. The square pixel constraint is shown to be geometrically equivalent to treating the image plane as a metric scene plane. Use of the vanishing point configuration is extended to two views, where three vanishing points and known epipolar geometry define a three dimensional affine reconstruction. Calibration and metric reconstruction follows similarly to the single view case, with the addition of auto-calibration constraints from the motion between views. The auto-calibration constraints are derived from the geometric representation of the square pixel constraints, by transferring the image plane circular points between views. Degenerate cases for constraints from square pixels and cameras having identical internal parameters are described. Finally, a constraint on the metric rectification of an affine reconstruction from the relative lengths of a pair of 3D line segments is developed. The constraint is applied to human motion capture from a pair of affine cameras.
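
One classical result the thesis builds on can be written down compactly: with square pixels and zero skew, three finite vanishing points v1, v2, v3 of mutually orthogonal directions satisfy (vi - p) · (vj - p) = -f^2 for every pair, which fixes the principal point p (the orthocenter of the vanishing-point triangle) and the focal length f. The sketch below is my own illustration of that relation, not code from the thesis.

    import numpy as np

    def calibrate_from_vanishing_points(v1, v2, v3):
        v1, v2, v3 = map(np.asarray, (v1, v2, v3))
        # Subtracting pairs of constraints gives two linear equations in the principal point p.
        A = np.array([v2 - v3, v1 - v3], dtype=float)
        b = np.array([v1 @ (v2 - v3), v2 @ (v1 - v3)], dtype=float)
        p = np.linalg.solve(A, b)
        f = np.sqrt(-(v1 - p) @ (v2 - p))
        K = np.array([[f, 0, p[0]], [0, f, p[1]], [0, 0, 1.0]])
        return K
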
33

Tummala, Gopi Krishna. "Automatic Camera Calibration Techniques for Collaborative Vehicular Applications." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1545874031461163.

34

Henrichsen, Arne. "3D reconstruction and camera calibration from 2D images." Master's thesis, University of Cape Town, 2000. http://hdl.handle.net/11427/9725.

Abstract:
Includes bibliographical references.
A 3D reconstruction technique from stereo images is presented that needs minimal intervention from the user. The reconstruction problem consists of three steps, each of which is equivalent to the estimation of a specific geometry group. The first step is the estimation of the epipolar geometry that exists between the stereo image pair, a process involving feature matching in both images. The second step estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. Camera calibration forms part of the third step in obtaining the metric geometry, from which it is possible to obtain a 3D model of the scene. The advantage of this system is that the stereo images do not need to be calibrated in order to obtain a reconstruction. Results for both the camera calibration and reconstruction are presented to verify that it is possible to obtain a 3D model directly from features in the images.
35

Wadell, Elwood Talmadge. "An analysis of camera calibration for voxel coloring." Lexington, Ky. : [University of Kentucky Libraries], 2002. http://lib.uky.edu/ETD/ukycosc2002t00068/ETThesis.pdf.

Abstract:
Thesis (M.S.)--University of Kentucky, 2002.
Title from document title page. Document formatted into pages; contains vi, 45 p. : ill. Includes abstract. Includes bibliographical references (p. 40-44).
36

Hammarlund, Emil. "Target-less and targeted multi-camera color calibration." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.

Abstract:
Multiple camera arrays are beginning to see more widespread use in a variety of different applications, be it for research purposes or for enhancing the viewing experience in entertainment. However, when using multiple cameras the images produced are often not color consistent, due to a variety of reasons such as differences in lighting, chip-level differences, etc. To address this there exists a multitude of different color calibration algorithms. This paper examines two different color calibration algorithms, one targeted and one target-less. Both methods were implemented in Python using the libraries OpenCV, Matplotlib, and NumPy. Once the algorithms had been implemented, they were evaluated based on two metrics: color range homogeneity and color accuracy to target values. The targeted color calibration algorithm was more effective at improving the color accuracy to ground truth than the target-less color calibration algorithm, but the target-less algorithm deteriorated the color range homogeneity less than the targeted color calibration algorithm. After both methods were tested, an improvement of the targeted color calibration algorithm was attempted. The resulting images were then evaluated based on the same two criteria as before; the modified version of the targeted color calibration algorithm performed better than the original targeted algorithm with respect to color range homogeneity while maintaining a similar level of color accuracy to ground truth. Furthermore, the color range homogeneity of the modified targeted algorithm was compared with that of the target-less algorithm, and the two performed similarly. Based on these results, it was concluded that the targeted color calibration was superior to the target-less algorithm.
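
The targeted approach can be illustrated by a minimal sketch (not the thesis's implementation): RGB values measured on a colour chart are mapped onto their reference values with a 3x3 correction matrix fitted by least squares; measured and reference are (N, 3) arrays of patch colours.

    import numpy as np

    def fit_color_matrix(measured, reference):
        M, *_ = np.linalg.lstsq(measured, reference, rcond=None)   # (3, 3) correction matrix
        return M

    def apply_color_matrix(image, M):
        corrected = image.reshape(-1, 3).astype(np.float64) @ M
        return np.clip(corrected, 0, 255).reshape(image.shape).astype(np.uint8)
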
37

Esquivel, Sandro [Verfasser]. "Eye-to-Eye Calibration - Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods / Sandro Esquivel." Kiel : Universitätsbibliothek Kiel, 2015. http://d-nb.info/1073150615/34.

38

Söderroos, Anna. "Fisheye Camera Calibration and Image Stitching for Automotive Applications." Thesis, Linköpings universitet, Datorseende, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121399.

Abstract:
Integrated camera systems for increasing safety and maneuverability are becoming increasingly common for heavy vehicles. One problem with heavy vehicles today is that there are blind spots where the driver has no or very little view. There is a great demand for increasing safety and helping the driver to get a better view of the surroundings. This can be achieved by a sophisticated camera system, using cameras with a wide field of view, that covers dangerous blind spots. This master thesis aims to investigate and develop a prototype solution for a camera system consisting of two fisheye cameras. The solution covers both hardware choices and software development, including camera calibration and image stitching. Two different fisheye camera calibration toolboxes are compared and their results discussed, with the aim of finding the most suitable one for this application. The results from the two toolboxes differ in performance, and the result from only one of the toolboxes is sufficient for image stitching.
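
Once a fisheye calibration has been obtained from either toolbox, applying it typically amounts to building undistortion maps and remapping each frame. The sketch below shows this with OpenCV's fisheye model; the intrinsic matrix K and distortion coefficients D are made-up placeholders for what the calibration would return.

    import cv2
    import numpy as np

    K = np.array([[420.0, 0.0, 640.0], [0.0, 420.0, 480.0], [0.0, 0.0, 1.0]])
    D = np.array([-0.05, 0.01, 0.0, 0.0])       # k1..k4 of the equidistant fisheye model

    def undistort_fisheye(frame):
        h, w = frame.shape[:2]
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
        return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
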
39

Axholt, Magnus. "Pinhole Camera Calibration in the Presence of Human Noise." Doctoral thesis, Linköpings universitet, Medie- och Informationsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-72055.

Abstract:
The research work presented in this thesis is concerned with the analysis of the human body as a calibration platform for estimation of a pinhole camera model used in Augmented Reality environments mediated through Optical See-Through Head-Mounted Display. Since the quality of the calibration ultimately depends on a subject’s ability to construct visual alignments, the research effort is initially centered around user studies investigating human-induced noise, such as postural sway and head aiming precision. Knowledge about subject behavior is then applied to a sensitivity analysis in which simulations are used to determine the impact of user noise on camera parameter estimation. Quantitative evaluation of the calibration procedure is challenging since the current state of the technology does not permit access to the user’s view and measurements in the image plane as seen by the user. In an attempt to circumvent this problem, researchers have previously placed a camera in the eye socket of a mannequin, and performed both calibration and evaluation using the auxiliary signal from the camera. However, such a method does not reflect the impact of human noise during the calibration stage, and the calibration is not transferable to a human as the eyepoint of the mannequin and the intended user may not coincide. The experiments performed in this thesis use human subjects for all stages of calibration and evaluation. Moreover, some of the measurable camera parameters are verified with an external reference, addressing not only calibration precision, but also accuracy.
40

Zhang, Jinlei. "Camera calibration for a three-dimensional range finding system." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/958780.

Abstract:
The purpose of this project is to develop the procedures to perform camera calibration in a three-dimensional range finding system. The goal is to have a system that provides reasonably accurate range data which can be used in further three-dimensional computer vision research such as edge detection, surface recovery and object recognition. In this project, an active-lighting, optical, triangulation-based range finding system has been developed. The software system is designed using object-oriented technology and implemented in the C++ programming language. The overall performance of the system is investigated, and the system has achieved 0.5 mm (or 4%) accuracy. A review of three range data acquisition techniques is given. Based on an analysis of the current system, suggestions for future improvements are also provided.
Department of Physics and Astronomy
41

Ižo, Tomáš 1979. "Simultaneous camera calibration and pose estimation from multiple views." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87435.

42

Carlsen, Tim, André Ehrlich, and Manfred Wendisch. "Characterization and calibration of a Full Stokes polarization camera." Universität Leipzig, 2015. https://ul.qucosa.de/id/qucosa%3A16652.

Abstract:
Initially unpolarized solar radiation is polarized in the atmosphere due to scattering processes at molecules and aerosols. Therefore, the measurement of the polarization state of solar radiation is of vital importance in remote sensing. A SALSA Full Stokes polarization camera measuring the complete Stokes vectors in real time is characterized within this work. The main focus lies on the radiometric calibration as well as the determination and validation of the calibration matrix based on a Data Reduction method. One main issue is the temporal instability of the calibration matrix, which gives rise to the need of a thorough calibration process. In accordance with theoretical expectations and model simulations, the SALSA Full Stokes polarization camera provides reliable measurement results under the condition of Rayleigh scattering.
Die beim Eintritt in die Atmosphäre unpolarisierte solare Strahlung wird durch Streuprozesse an Molekülen oder Aerosolpartikeln polarisiert. Die Messung des Polarisationszustandes der solaren Strahlung spielt deshalb in der Fernerkundung eine wichtige Rolle. Die vorliegende Arbeit charakterisiert eine SALSA Full Stokes Polarisationskamera, die den kompletten Stokes-Vektor in Echtzeit misst. Das Hauptaugenmerk liegt dabei auf der radiometrischen Kalibrierung sowie der Bestimmung und Validierung der Kalibrationsmatrix über die Methode der Datenreduktion. Die zeitliche Instabilität der Kalibrationsmatrix stellt ein großes Problem dar und stellt Anforderungen an den Umfang der Kalibrierung. Mit der SALSA Full Stokes Polarisationskamera sind zuverlässige Messungen unter einer rayleighstreuenden Atmosphäre möglich, die in Übereinstimmung mit den theoretischen Erwartungen und Modellsimulationen stehen.
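
The data-reduction idea mentioned in the abstract can be outlined briefly (an illustration, not the paper's calibration): if each analyser channel k of the camera measures an intensity I_k = w_k · S, the calibrated channel vectors w_k form a measurement matrix W, and the Stokes vector is recovered per pixel through the pseudo-inverse of W. The channel responses below are hypothetical.

    import numpy as np

    W = np.array([[0.5,  0.5,  0.0,  0.0],      # hypothetical calibrated channel responses
                  [0.5, -0.5,  0.0,  0.0],
                  [0.5,  0.0,  0.5,  0.0],
                  [0.5,  0.0,  0.0,  0.5]])

    def stokes_from_intensities(I):
        """I: (4, H, W) stack of channel images -> (4, H, W) Stokes components."""
        return np.einsum('ij,jhw->ihw', np.linalg.pinv(W), I)
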
43

Ali-Bey, Mohamed. "Contribution à la spécification et à la calibration des caméras relief." Thesis, Reims, 2011. http://www.theses.fr/2011REIMS022/document.

Abstract:
Les travaux proposés dans cette thèse s’inscrivent dans le cadre des projets ANR-Cam-Relief et du CPER-CREATIS soutenus par l’Agence Nationale de la Recherche, la région Champagne-Ardenne et le FEDER. Ces études s'inscrivent également dans le cadre d’une collaboration avec la société 3DTV-Solutions et deux groupes du CReSTIC (AUTO et SIC). L’objectif de ce projet est ,entre autre, de concevoir par analogie aux systèmes 2D grands publics actuels, des systèmes de prise de vue 3D permettant de restituer sur écrans reliefs (auto-stéréoscopiques) visibles sans lunettes, des images 3D de qualité. Notre intérêt s’est porté particulièrement sur les systèmes de prise de vue à configuration parallèle et décentrée. Les travaux de recherche menés dans cette thèse sont motivés par l’incapacité des configurations statiques de ces systèmes de prise de vue de capturer correctement des scènes dynamiques réelles pour une restitution autostéréoscopique correcte. Pour surmonter cet obstacle, un schéma d’adaptation de laconfiguration géométrique du système de prise de vue est proposé. Afin de déterminer les paramètres devant être concernés par cette adaptation, une étude de l’effet de la constance de chaque paramètre sur la qualité du rendu relief est menée. Les répercussions des contraintes dynamiques et mécaniques sur le relief restitué sont ensuite examinées. La précision de positionnement des paramètres structurels est abordée à travers la proposition de deux méthodes d’évaluation de la qualité du rendu relief, pour déterminer les seuils d’erreur de positionnement des paramètres structurels du système de prise de vue. Enfin, le problème de la calibration est abordée, où l’on propose une méthode basée sur la méthode de transformation linéaire directe DLT, et des perspectives sont envisagées pour l’asservissement de ces systèmes de prise de vue par asservissement classique ou par asservissement visuel
This thesis is part of the ANR-Cam-Relief and CPER-CREATIS projects supported by the French National Research Agency (ANR), the Champagne-Ardenne region and the FEDER. The studies were also carried out in collaboration with the company 3DTV-Solutions and two CReSTIC groups (AUTO and SIC). The objective of the project is, among other things, to design, by analogy with today's consumer 2D systems, 3D shooting systems able to display high-quality 3D images on autostereoscopic screens viewable without glasses. Our interest focuses particularly on shooting systems with a parallel, decentred configuration. The research work carried out in this thesis is motivated by the inability of static configurations of these shooting systems to correctly capture real dynamic scenes for correct autostereoscopic rendering. To overcome this drawback, an adaptation scheme for the geometric configuration of the shooting system is proposed. To determine which parameters should be affected by this adaptation, the effect of holding each parameter constant on rendering quality is studied. The repercussions of dynamic and mechanical constraints on the 3D rendering are then examined. The positioning accuracy of the structural parameters is addressed through two proposed methods for assessing rendering quality, used to determine positioning-error thresholds for the structural parameters of the shooting system. Finally, the calibration problem is discussed; we propose an approach based on the direct linear transformation (DLT) method, and perspectives are outlined for automatic control of these shooting systems using classical approaches or visual servoing.
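The calibration approach mentioned at the end of both abstracts builds on the direct linear transformation (DLT). As a point of reference, the sketch below shows the classical DLT estimation of a 3x4 projection matrix from 3D-2D correspondences; the function name and the synthetic usage are illustrative, not details taken from the thesis.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from at least six 3D-2D
    correspondences (3D points not all coplanar), classical DLT."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear equations in the
        # 12 unknown entries of P (stacked row-wise in the vector p).
        rows.append([*Xh, 0.0, 0.0, 0.0, 0.0, *(-u * np.asarray(Xh))])
        rows.append([0.0, 0.0, 0.0, 0.0, *Xh, *(-v * np.asarray(Xh))])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of A with the smallest
    # singular value (least-squares solution of A p = 0 with ||p|| = 1).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # defined only up to scale

# Hypothetical usage with measured correspondences:
# P_est = dlt_projection_matrix(world_pts, image_pts)
```

The recovered matrix is defined only up to scale; intrinsic and extrinsic parameters can subsequently be factored out of it if the application requires them.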
APA, Harvard, Vancouver, ISO, and other styles
44

Knight, Joss G. H. "Towards fully autonomous visual navigation." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hayman, Eric. "The use of zoom within active vision." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nagdev, Alok. "Georeferencing digital camera images using internal camera model." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Andersson, Robert. "A calibration method for laser-triangulating 3D cameras." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15735.

Full text
Abstract:

A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera as well as certain properties of the camera are known, it is possible to calculate the coordinates of all points along the profile of the object. If either the object or the camera-and-laser rig moves with a known motion, several measurements can be combined into a three-dimensional view of the object.

Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method whose advantages are that the required calibration objects are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is described thoroughly, and several mathematical derivations are included as appendices for completeness.

The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
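The first paragraph of the abstract describes the core computation of laser triangulation: back-project a pixel into a viewing ray and intersect it with the known laser plane. A minimal sketch of that step under an ideal pinhole model follows; the intrinsic matrix, plane parameters and pixel coordinates are assumed values for illustration only.

```python
import numpy as np

# Sketch: triangulate a point on the laser plane from its pixel coordinates,
# assuming an ideal pinhole camera (no lens distortion).
K = np.array([[800.0,   0.0, 320.0],     # assumed intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([0.0, 0.6, 0.8])            # assumed laser-plane normal (camera frame)
d = 0.5                                  # plane equation: n . X = d  (metres)

def triangulate_on_plane(u, v):
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))   # back-projected viewing ray
    t = d / np.dot(n, ray)                            # ray-plane intersection
    return t * ray                                    # 3D point in the camera frame

print(triangulate_on_plane(400.0, 260.0))
```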

APA, Harvard, Vancouver, ISO, and other styles
48

Waddell, Elwood Talmadge Jr. "An Analysis of Camera Calibration for Voxel Coloring Including the Effect of Calibration on Voxelization Errors." UKnowledge, 2002. http://uknowledge.uky.edu/gradschool_theses/220.

Full text
Abstract:
This thesis characterizes the problem of relative camera calibration in the context of three-dimensional volumetric reconstruction. The general effects of camera calibration errors on the parameters of the projection matrix are well understood, and calibration error and Euclidean world error for a single camera can be related via the inverse perspective projection. However, there has been little analysis of camera calibration for a large number of views and of how those errors directly influence the accuracy of recovered three-dimensional models. A specific analysis of how camera calibration error propagates to reconstruction errors in traditional voxel coloring algorithms is discussed. A review of the voxel coloring algorithm is included, and the general methods applied in the coloring algorithm are related to camera error. In addition, a specific but common experimental setup used to acquire real-world objects through voxel coloring is introduced. Methods for relative calibration of this setup are discussed, as well as a method to measure calibration error. An analysis of the effect of these errors on voxel coloring is presented, together with a discussion of the effects of the resulting world-space error.
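To make the error-propagation question concrete: one way to relate calibration error to voxel-space error is to project a voxel centre with both a nominal and a perturbed projection matrix and compare the image-plane footprints; if the discrepancy exceeds the voxel's projected size, the voxel may be carved or coloured incorrectly. The sketch below only illustrates that idea under assumed matrices and a small rotational perturbation; it is not the measurement procedure used in the thesis.

```python
import numpy as np

def project(P, X):
    """Project a 3D point (non-homogeneous) with a 3x4 matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Nominal projection matrix: K [R | t] with the camera 2 m from the origin.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
t = np.array([[0.0], [0.0], [2.0]])
P = K @ np.hstack([np.eye(3), t])

# Perturbed matrix: a small rotation error of 0.2 degrees about the y-axis.
a = np.deg2rad(0.2)
R_err = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
P_err = K @ np.hstack([R_err, t])

voxel_centre = np.array([0.1, 0.05, 0.0])
err_px = np.linalg.norm(project(P, voxel_centre) - project(P_err, voxel_centre))
print(f"reprojection error: {err_px:.2f} px")  # compare against the voxel's pixel footprint
```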
APA, Harvard, Vancouver, ISO, and other styles
49

Isaksson, Jakob, and Lucas Magnusson. "Camera pose estimation with moving Aruco-board. : Retrieving camera pose in a stereo camera tolling system application." Thesis, Tekniska Högskolan, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-51076.

Full text
Abstract:
Stereo camera systems can be utilized for different applications such as position estimation, distance measuring, and 3D modelling. However, this requires the cameras to be calibrated. This paper proposes a traditional calibration solution with Aruco markers mounted on a vehicle to estimate the pose of a stereo camera system in a tolling environment. Our method is based on Perspective-n-Point, which presumes the intrinsic matrix to be already known. The goal is to find each camera's pose by identifying the marker corners in pixel coordinates as well as in world coordinates. Our tests show a worst-case error of 21.5 cm and a potential for centimetre accuracy. Validity is also verified by testing the obtained pose estimation live in the camera system. The paper concludes that the method has potential for higher accuracy than obtained in our experiment, which was limited by several factors. Further work would focus on enlarging the markers and widening the distance between them.
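The pose-estimation step described here, Perspective-n-Point with known intrinsics and marker corners given in both pixel and world coordinates, can be sketched with OpenCV's solvePnP as below. The intrinsic matrix, corner coordinates and variable names are illustrative assumptions; marker detection (for example with OpenCV's aruco module) is assumed to have already produced the pixel corners.

```python
import numpy as np
import cv2

# Assumed known intrinsics (from a prior intrinsic calibration).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

# Marker corners in world coordinates (metres) and the corresponding
# detected pixel coordinates -- illustrative values only.
world_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.2, 0.0],
                      [0.0, 0.2, 0.0], [1.0, 0.0, 0.0], [1.2, 0.0, 0.0]],
                     dtype=np.float64)
image_pts = np.array([[812.0, 604.0], [905.0, 601.0], [903.0, 512.0],
                      [810.0, 515.0], [1270.0, 598.0], [1362.0, 596.0]],
                     dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec      # camera centre in world coordinates
print(ok, camera_position.ravel())
```

Repeating this for each camera of the stereo pair yields the relative pose between the two cameras, which is what the tolling application needs.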
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Yan. "3D reconstruction and camera calibration from circular-motion image sequences." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B36365919.

Full text
APA, Harvard, Vancouver, ISO, and other styles