To see the other types of publications on this topic, follow the link: Camera calibration.

Journal articles on the topic 'Camera calibration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Camera calibration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xia, Dan, De Hua Li, and Sheng Yong Xu. "Efficient and Accurate Camera Calibration Based on Planar Pattern." Advanced Materials Research 204-210 (February 2011): 1258–61. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.1258.

Full text
Abstract:
We describe an effective method for calibrating cameras using planar calibration patterns. The control points of the calibration pattern are localized by a Harris detector incorporating the gradient histogram. The improved localization accuracy of the control points in turn improves the accuracy of the camera calibration. Additionally, an optimization step is carried out to further increase the accuracy of the calibration results. Experiments with real images verified the effectiveness and robustness of the proposed method.
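For planar targets of this kind, the quantity being estimated is the homography between the target plane and the image, which planar calibration methods factor back into intrinsics and per-view pose. A minimal sketch of the forward model that such a calibration inverts, with illustrative intrinsics and pose (not values from the paper):

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_planar_points(K, R, t, pts_xy):
    """Project points on the target's Z=0 plane into the image.

    For a planar target the projection collapses to a 3x3 homography
    H = K @ [r1 r2 t], built from the first two rotation columns.
    """
    H = K @ np.column_stack([R[:, 0], R[:, 1], t])
    ph = H @ np.vstack([pts_xy.T, np.ones(len(pts_xy))])
    return (ph[:2] / ph[2]).T

# Fronto-parallel view, 1 m in front of the camera (assumed pose).
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
corners = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])  # metres on the board
pixels = project_planar_points(K, R, t, corners)
```

With several views of such correspondences, the homographies constrain K, which is the essence of planar-pattern calibration.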
2

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Full text
Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of different depth cameras need to be unified into a single coordinate system, and the multiple camera systems with a specific angle have a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration plane are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but it can also be applied to all 3D calibration objects consisting of planar chessboards. This method can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
3

Ma, Zhenling, Xu Zhong, Hong Xie, Yongjun Zhou, Yuan Chen, and Jiali Wang. "A Combined Physical and Mathematical Calibration Method for Low-Cost Cameras in the Air and Underwater Environment." Sensors 23, no. 4 (February 11, 2023): 2041. http://dx.doi.org/10.3390/s23042041.

Full text
Abstract:
Low-cost camera calibration is vital in air and underwater photogrammetric applications. However, various lens distortions and the influence of the underwater environment are difficult to cover with a universal distortion compensation model, and residual distortions may remain after conventional calibration. In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras, which can adapt to both in-air and underwater environments. The commonly used physical distortion models are integrated to describe the image distortions. The combination is a high-order polynomial, which can be treated as a set of basis functions that successively approximate the image deformation from the point of view of mathematical approximation. The calibration process is repeated until certain criteria are met and the distortions are reduced to a minimum. In the end, several sets of distortion parameters and stable camera interior orientation (IO) parameters serve as the final camera calibration results. The Canon and GoPro in-air calibration experiments show that the GoPro exhibits distortions seven times larger than the Canon. Most of the Canon distortions are described by the Australis model, while most decentering distortions of the GoPro remain. Using the proposed method, all the Canon and GoPro distortions decrease to close to 0 after four calibrations. Meanwhile, stable camera IO parameters are obtained. The GoPro Hero 5 Black underwater calibration indicates that four sets of distortion parameters and stable camera IO parameters are obtained after four calibrations. The camera calibration results differ between the underwater environment and air owing to refraction and asymmetric environment effects.
In summary, the proposed method improves accuracy compared with the conventional method and offers a flexible way to calibrate low-cost cameras for highly accurate in-air and underwater measurement and 3D modeling applications.
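The physical distortion models being combined here are typically of the Brown-Conrady family: radial terms (k1, k2) plus decentering terms (p1, p2). A sketch of that common form applied to normalized image coordinates, with made-up coefficients rather than the paper's calibrated values:

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering (p1, p2) distortion to
    normalized image coordinates, Brown-Conrady style."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

# Illustrative coefficients only; a real calibration estimates these.
xd, yd = distort(0.2, 0.1, k1=-0.25, k2=0.05, p1=0.001, p2=-0.0005)
```

Higher-order radial terms (k3, ...) extend the same polynomial, which is what makes the combination usable as successive basis functions.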
4

Pak, Alexey, Steffen Reichel, and Jan Burke. "Machine-Learning-Inspired Workflow for Camera Calibration." Sensors 22, no. 18 (September 8, 2022): 6804. http://dx.doi.org/10.3390/s22186804.

Full text
Abstract:
The performance of modern digital cameras approaches physical limits and enables high-precision measurements in optical metrology and in computer vision. All camera-assisted geometrical measurements are fundamentally limited by the quality of camera calibration. Unfortunately, this procedure is often effectively considered a nuisance: calibration data are collected in a non-systematic way and lack quality specifications; imaging models are selected in an ad hoc fashion without proper justification; and calibration results are evaluated, interpreted, and reported inconsistently. We outline an (arguably more) systematic and metrologically sound approach to calibrating cameras and characterizing the calibration outcomes that is inspired by typical machine learning workflows and practical requirements of camera-based measurements. Combining standard calibration tools and the technique of active targets with phase-shifted cosine patterns, we demonstrate that the imaging geometry of a typical industrial camera can be characterized with sub-mm uncertainty up to distances of a few meters even with simple parametric models, while the quality of data and resulting parameters can be known and controlled at all stages.
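Active targets with phase-shifted cosine patterns are decoded per pixel; for four shifts of 90 degrees, the standard four-step formula cancels the unknown offset and amplitude. A sketch of that decoding step (not the authors' exact pipeline):

```python
import math

def decode_phase(i0, i1, i2, i3):
    """Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2).
    The differences cancel the unknown offset A and amplitude B."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthesize the four shifted intensities for a known phase.
phi_true, A, B = 0.7, 0.5, 0.4
imgs = [A + B * math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = decode_phase(*imgs)
```

The recovered phase gives sub-pixel target coordinates, which is what makes active targets attractive for collecting dense, well-characterized calibration data.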
5

Krishnan, Aravindhan K., and Srikanth Saripalli. "Cross-Calibration of RGB and Thermal Cameras with a LIDAR for RGB-Depth-Thermal Mapping." Unmanned Systems 05, no. 02 (April 2017): 59–78. http://dx.doi.org/10.1142/s2301385017500054.

Full text
Abstract:
We present a method for calibrating the extrinsic parameters between an RGB camera, a thermal camera, and a LIDAR. The calibration procedure we use is common to both the RGB and thermal cameras. The extrinsic calibration procedure assumes that the cameras are geometrically calibrated. To aid the geometric calibration of the thermal camera, we use a calibration target made of black-and-white melamine that looks like a checkerboard pattern in the thermal and RGB images. For the extrinsic calibration, we place a circular calibration target in the common field of view of the cameras and the LIDAR and compute the extrinsic parameters by minimizing an objective function that aligns the edges of the circular target in the LIDAR to its corresponding edges in the RGB and thermal images. We illustrate the convexity of the objective function and discuss the convergence of the algorithm. We then identify the various sources of coloring errors (after cross-calibration) as (a) noise in the LIDAR points, (b) error in the intrinsic parameters of the camera, (c) error in the translation parameters between the LIDAR and the camera, and (d) error in the rotation parameters between the LIDAR and the camera. We analyze the contribution of these errors to the coloring of a 3D point and illustrate that they are related to the depth of the point considered: errors (a), (b), and (c) are inversely proportional to the depth, while error (d) is directly proportional to the depth.
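The core operation inside such an objective is mapping a LIDAR point through the candidate extrinsics and projecting it into the image, where it can be compared with detected edges. A sketch with assumed intrinsics and extrinsics; note that a fixed translation offset (the t term) moves the projection less at larger depth, consistent with the depth dependence described above:

```python
import numpy as np

def lidar_point_to_pixel(p_lidar, R, t, K):
    """Transform a LIDAR point into the camera frame with extrinsics
    (R, t), then project with intrinsics K."""
    p_cam = R @ p_lidar + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
# Assumed extrinsics: identity rotation, small lateral offset.
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])
uv = lidar_point_to_pixel(np.array([0.0, 0.0, 5.0]), R, t, K)
```

The extrinsic calibration then searches over (R, t) to minimize the distance between such projections and the circular target's image edges.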
6

Varnam, Matthew, Mike Burton, Ben Esse, Giuseppe Salerno, Ryunosuke Kazahaya, and Martha Ibarra. "Two Independent Light Dilution Corrections for the SO2 Camera Retrieve Comparable Emission Rates at Masaya Volcano, Nicaragua." Remote Sensing 13, no. 5 (March 3, 2021): 935. http://dx.doi.org/10.3390/rs13050935.

Full text
Abstract:
SO2 cameras are able to measure rapid changes in volcanic emission rate but require accurate calibrations and corrections to convert optical depth images into slant column densities. We conducted a test at Masaya volcano of two SO2 camera calibration approaches, calibration cells and co-located spectrometer, and corrected both calibrations for light dilution, a process caused by light scattering between the plume and camera. We demonstrate an advancement on the image-based correction that allows the retrieval of the scattering efficiency across a 2D area of an SO2 camera image. When appropriately corrected for the dilution, we show that our two calibration approaches produce final calculated emission rates that agree with simultaneously measured traverse flux data and each other but highlight that the observed distribution of gas within the image is different. We demonstrate that traverses and SO2 camera techniques, when used together, generate better plume speed estimates for traverses and improved knowledge of wind direction for the camera, producing more reliable emission rates. We suggest combining traverses and the SO2 camera should be adopted where possible.
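The calibration-cell approach mentioned here amounts to converting per-pixel optical depth into slant column density via a linear fit through cells of known column density. A minimal sketch with hypothetical cell values (the paper's actual retrieval additionally corrects the optical depths for light dilution):

```python
import math

def optical_depth(i_plume, i_background):
    """Apparent absorbance of the plume (Beer-Lambert)."""
    return -math.log(i_plume / i_background)

# Hypothetical calibration cells: known slant column densities (SCD)
# and the optical depths (OD) the camera measured through each cell.
scd_cells = [0.0, 100.0, 400.0, 1000.0]   # e.g. ppm*m
od_cells = [0.0, 0.05, 0.20, 0.50]

# Least-squares slope through the origin converts OD to SCD.
k = sum(o * s for o, s in zip(od_cells, scd_cells)) / sum(o * o for o in od_cells)

def od_to_scd(od):
    return k * od
```

Integrating the calibrated SCD image across a plume cross-section and multiplying by plume speed then yields the emission rate.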
7

Haugh, Michael J., Michael R. Charest, Patrick W. Ross, Joshua J. Lee, Marilyn B. Schneider, Nathan E. Palmer, and Alan T. Teruya. "Calibration of X-ray imaging devices for accurate intensity measurement." Powder Diffraction 27, no. 2 (June 2012): 79–86. http://dx.doi.org/10.1017/s0885715612000413.

Full text
Abstract:
National Security Technologies (NSTec) has developed calibration procedures for X-ray imaging systems. The X-ray sources used for calibration are both diode-type sources and diode/fluorescer combinations. Calibrating the X-ray detectors is key to accurate calibration of the X-ray sources. Both energy-dispersive detectors and photodiodes measuring total flux were used. We have developed calibration techniques for the detectors using radioactive sources that are traceable to the National Institute of Standards and Technology (NIST). The German synchrotron at the Physikalisch-Technische Bundesanstalt (PTB) was used to calibrate the silicon photodiodes over the energy range from 50 to 60 keV. The measurements on X-ray cameras made using the NSTec X-ray sources included quantum efficiency averaged over all pixels, camera counts per photon per pixel, and response variation across the sensor. The instrumentation required to accomplish the calibrations is described. The X-ray energies ranged from 720 eV to 22.7 keV. The X-ray sources produce narrow energy bands, allowing us to determine the properties as a function of X-ray energy. The calibrations were done for several types of imaging devices: back- and front-illuminated CCD (charge-coupled device) sensors, and a CID (charge injection device) camera. The CCD and CID camera types differ significantly in some of the properties that affect the accuracy of X-ray intensity measurements. All the cameras discussed here are silicon based. The measurements of the quantum efficiency variation with X-ray energy are compared to models of the sensor structure. The cameras that are not back-thinned are compared to those that are.
8

Kim, Jonguk, Hyansu Bae, and Suk Gyu Lee. "Image Distortion and Rectification Calibration Algorithms and Validation Technique for a Stereo Camera." Electronics 10, no. 3 (February 1, 2021): 339. http://dx.doi.org/10.3390/electronics10030339.

Full text
Abstract:
This paper focuses on the calibration problem using stereo camera images. Advanced vehicle systems such as smart cars and mobile robots require accurate and reliable vision in order to detect obstacles and markers in their surroundings. Such modern vehicles can be equipped with sensors and cameras together or separately. In this study, we propose new methodologies for stereo camera calibration based on distortion correction and image rectification. Once the calibration is complete, validation of the corrections is presented, followed by an evaluation of the calibration process. The validation stage is usually not considered jointly with calibration in other studies; in camera mass production, however, manufacturers routinely pair calibration with proprietary validation techniques. Here, we aim to present a single process for the calibration and validation of stereo cameras. The experimental results show disparity maps in comparison with another study and demonstrate that the proposed calibration methods are efficient.
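Rectification works on undistorted coordinates, and the usual radial distortion model has no closed-form inverse; a common remedy is fixed-point iteration. A sketch with illustrative coefficients (not the paper's calibration):

```python
def undistort_point(xd, yd, k1, k2, iters=20):
    """Invert the radial model x_d = x * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration (no closed-form inverse exists)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round trip with illustrative coefficients: distort, then undistort.
k1, k2 = -0.2, 0.02
x0, y0 = 0.3, -0.2
r2 = x0 * x0 + y0 * y0
s = 1.0 + k1 * r2 + k2 * r2 * r2
xu, yu = undistort_point(x0 * s, y0 * s, k1, k2)
```

A validation stage of the kind the paper argues for can reuse exactly this round trip: distort with the estimated parameters, undistort, and check the residual.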
9

Xie, Xiaokai, Jungang Yang, Chao Xiao, Tianhao Wu, Ting Liu, and Wei An. "Calibration and Target Signal-to-Noise Ratio Enhancement Method of Infrared Camera Array Based on Refocus." Journal of Physics: Conference Series 2187, no. 1 (February 1, 2022): 012063. http://dx.doi.org/10.1088/1742-6596/2187/1/012063.

Full text
Abstract:
The accuracy of infrared (IR) camera calibration directly affects the quality of IR imaging results. Many imaging devices have been utilized for infrared dim and small target detection, which is a major research direction in the infrared field. Compared with single-lens cameras, an IR camera array can improve detection capability by adding more imaging units and is expected to break the limitations of traditional single-lens imaging systems in cost, volume, and performance. Existing calibration methods for camera arrays are generally applied in the visible spectrum rather than the IR band. In this paper, we propose a calibration method for IR camera arrays based on refocusing. Furthermore, the target signal-to-noise ratio enhancement method becomes effective after calibration of the IR camera array. As the obtained camera parameters can be used to correct image distortion and improve imaging quality, the calibration process supports many subsequent tasks such as target detection. Experiments indicate that our method achieves the expected effect.
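Refocusing a calibrated camera array amounts to shifting each sub-image by its per-camera disparity for the chosen focal depth and averaging, which is also why a dim target adds coherently while uncorrelated noise averages down. A simplified integer-pixel sketch of that shift-and-add step:

```python
import numpy as np

def refocus(stack, shifts):
    """Shift-and-add refocusing: displace each sub-camera image by its
    per-camera disparity for the chosen depth, then average.
    Integer-pixel shifts for simplicity; real pipelines interpolate."""
    acc = np.zeros_like(stack[0], dtype=float)
    for img, (dy, dx) in zip(stack, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(stack)

# Toy example: a point target seen with different disparities by 3 cameras.
disparities = [(0, 0), (1, 0), (0, -1)]
stack = []
for dy, dx in disparities:
    img = np.zeros((5, 5))
    img[2 + dy, 2 + dx] = 1.0   # target shifted by this camera's disparity
    stack.append(img)

# Compensating shifts bring the target back into alignment at (2, 2).
out = refocus(stack, [(-dy, -dx) for dy, dx in disparities])
```

Calibration matters precisely because the disparities used here follow from the estimated camera geometry; miscalibration smears the target instead of stacking it.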
10

Fryskowska, A., M. Kedzierski, A. Grochala, and A. Braula. "CALIBRATION OF LOW COST RGB AND NIR UAV CAMERAS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 817–21. http://dx.doi.org/10.5194/isprs-archives-xli-b1-817-2016.

Full text
Abstract:
Non-metric digital cameras are widely used for photogrammetric studies. The increase in the resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital terrain models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters; therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration tests, and various software packages. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and descriptions of different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that were used. The Sony NEX 5 camera calibration was performed using Image Master Calib, the MATLAB Camera Calibrator application, and Agisoft Lens. 2D test fields were used for the study. As part of the research, a comparative analysis of the results was carried out.
11

Muhovič, Jon, and Janez Perš. "Joint Calibration of a Multimodal Sensor System for Autonomous Vehicles." Sensors 23, no. 12 (June 17, 2023): 5676. http://dx.doi.org/10.3390/s23125676.

Full text
Abstract:
Multimodal sensor systems require precise calibration if they are to be used in the field. Due to the difficulty of obtaining the corresponding features from different modalities, the calibration of such systems is an open problem. We present a systematic approach for calibrating a set of cameras with different modalities (RGB, thermal, polarization, and dual-spectrum near infrared) with regard to a LiDAR sensor using a planar calibration target. Firstly, a method for calibrating a single camera with regard to the LiDAR sensor is proposed. The method is usable with any modality, as long as the calibration pattern is detected. A methodology for establishing a parallax-aware pixel mapping between different camera modalities is then presented. Such a mapping can then be used to transfer annotations, features, and results between highly differing camera modalities to facilitate feature extraction and deep detection and segmentation methods.
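The parallax-aware pixel mapping described here differs from a plain homography in that it uses per-pixel depth: back-project the source pixel, transform it into the destination camera frame, and re-project. A sketch with assumed intrinsics and a hypothetical baseline:

```python
import numpy as np

def map_pixel(u, v, depth, K_src, K_dst, R, t):
    """Back-project a source pixel at a known depth, move it into the
    destination camera frame with (R, t), and re-project. Without depth,
    a fixed homography would only be exact for points on one plane."""
    p_src = depth * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    p_dst = R @ p_src + t
    uvw = K_dst @ p_dst
    return uvw[:2] / uvw[2]

# Assumed: identical intrinsics, pure 0.2 m horizontal baseline.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = map_pixel(320.0, 240.0, 2.0, K, K, np.eye(3), np.array([0.2, 0.0, 0.0]))
```

Such a mapping is what lets annotations made in one modality (say, RGB) be transferred to a thermal or polarization image despite the parallax between the cameras.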
12

Zhao, Zijian, and Ying Weng. "A flexible method combining camera calibration and hand–eye calibration." Robotica 31, no. 5 (February 1, 2013): 747–56. http://dx.doi.org/10.1017/s0263574713000040.

Full text
Abstract:
We consider the conventional techniques of vision robot system calibration, where camera parameters and robot hand–eye parameters are computed separately, i.e., camera calibration is performed first and hand–eye calibration is then carried out based on the calibrated camera parameters. In this paper we propose a joint algorithm that combines camera calibration and hand–eye calibration. The proposed algorithm solves for the camera parameters and the hand–eye parameters simultaneously using nonlinear optimization. Both simulations and real experiments show the superiority of our algorithm. We also apply it in a real robot-assisted surgical system, with very good results.
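Hand–eye calibration is classically posed as solving AX = XB for the unknown gripper-to-camera transform X, where A and B are relative gripper and camera motions; the joint method above folds the camera parameters into the same optimization. A sketch of the constraint itself on synthetic SE(3) transforms:

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def hand_eye_residual(A, X, B):
    """Residual of the classic hand-eye constraint A @ X = X @ B."""
    return np.linalg.norm(A @ X - X @ B)

# Consistent synthetic case: choose X and B, derive A = X B X^-1.
X = se3(rot_z(0.3), np.array([0.1, -0.05, 0.2]))
B = se3(rot_z(0.7), np.array([0.0, 0.2, 0.1]))
A = X @ B @ np.linalg.inv(X)
```

In a real calibration, many (A, B) motion pairs are collected and X is estimated by minimizing the sum of such residuals.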
13

Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (January 31, 2022): 1097. http://dx.doi.org/10.3390/s22031097.

Full text
Abstract:
This paper proposes a new technique for performing 3D static point cloud registration after calibrating a multi-view RGB-D camera using a 3D joint set. Consistent feature points are required to calibrate a multi-view camera, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool such as a chessboard is used to calibrate a multi-view camera. This paper instead uses the joints of a human skeleton as feature points, so that calibration can be performed efficiently without special tools. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. Since the human body information captured by each camera may be incomplete, the joint sets predicted from the corresponding images may also be incomplete. After efficiently integrating multiple incomplete joint sets into one joint set, the multi-view cameras can be calibrated using the combined joint set to obtain the extrinsic matrices. To increase the accuracy of the calibration, multiple joint sets are used for optimization through temporal iteration. We show experimentally that it is possible to calibrate a multi-view camera using a large number of incomplete joint sets.
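Once corresponding 3D joints are available in two camera coordinate systems, the extrinsics between them reduce to a rigid alignment of point sets, solvable in closed form (Kabsch/Procrustes). A sketch of that core step; the paper's full pipeline additionally merges incomplete joint sets and iterates over time:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (Kabsch/Procrustes), assuming row-wise correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # D guards against reflections
    t = cq - R @ cp
    return R, t

# Synthetic joints in camera 1, and the same joints seen by camera 2.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
a = 0.5
R0 = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([0.2, -0.1, 0.3])
Q = P @ R0.T + t0
R, t = rigid_align(P, Q)
```

The recovered (R, t) is exactly the extrinsic matrix between the two cameras' coordinate systems.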
14

Sun, Cong, Haibo Liu, Mengna Jia, and Shengyi Chen. "Review of Calibration Methods for Scheimpflug Camera." Journal of Sensors 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/3901431.

Full text
Abstract:
The Scheimpflug camera offers a wide range of applications in close-range photogrammetry, particle image velocimetry, and digital image correlation, because its depth of field can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case, because the assumptions used by classical calibration methodologies no longer hold for cameras satisfying the Scheimpflug condition. Various methods have therefore been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments, including calibrations, reconstructions, and measurements, are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of the GNIM is slightly lower than that of the three parametric models. Moreover, the experimental results reveal that the tangential distortion parameters are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.
15

Hallert, Bertil. "Camera Calibration." Photogrammetric Record 3, no. 16 (August 26, 2006): 376. http://dx.doi.org/10.1111/j.1477-9730.1960.tb01300.x.

Full text
17

Yusoff, A. R., M. F. M. Ariff, K. M. Idris, Z. Majid, and A. K. Chong. "CAMERA CALIBRATION ACCURACY AT DIFFERENT UAV FLYING HEIGHTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 595–600. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-595-2017.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, and low-cost digital cameras are commonly used in UAV mapping. Camera calibration is therefore important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and in the field, and the UAV image mapping accuracy assessment used calibration parameters from different camera distances. The camera distances used for the calibration image acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres in the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as tools for the camera calibration and for checking the measurement accuracy at different camera distances. The bundle adjustment concept was applied in Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as this gave the best accuracy in both the laboratory and outdoor mapping. In conclusion, camera calibration at several camera distances should be applied to achieve better mapping accuracy, and the best camera parameters should be selected for highly accurate UAV image mapping.
18

Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.

Full text
Abstract:
This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, stereo calibration has rarely been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be used directly. In this paper, we develop a hybrid stereo system equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible light and gamma sources. The experimental results show that the measurement error is about 3%.
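The transfer from the vision view into the radiation view is a plane-induced homography, and applying one to pixel coordinates is a one-liner in homogeneous form. A sketch with a made-up H (a real one would come from the calibrated vision-to-radiation geometry):

```python
import numpy as np

def warp_points(H, pts):
    """Map pixel coordinates through a 3x3 homography H. Exact for points
    on the plane that induces H, e.g. a calibration-pattern plane."""
    ph = H @ np.vstack([pts.T, np.ones(len(pts))])
    return (ph[:2] / ph[2]).T

# Illustrative homography: uniform scale by 2 plus a translation.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
warped = warp_points(H, np.array([[5.0, 5.0]]))
```

In the hybrid system described above, pattern corners detected by the vision cameras are warped this way so that the radiation cameras can be calibrated despite seeing no visual texture.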
19

Jasinska, Aleksandra. "Evaluation of the Quality of a Checkerboard Camera Calibration Compared to a Calibration on a Laboratory Test Field." Baltic Surveying 16 (December 22, 2022): 21–28. http://dx.doi.org/10.22616/j.balticsurveying.2022.16.003.

Full text
Abstract:
For photogrammetric work, a fundamental issue is the determination of the camera interior orientation parameters (IOP). Without camera calibration, it is difficult to imagine a correct adjustment of the image network. Many industries use non-metric cameras, from automation and robotics to heritage inventories, and the increasingly popular social mapping phenomenon uses low-budget cameras. Many different calibration methods exist, but dedicated calibration fields are commonly replaced by fast in-plane calibration with regular patterns. The main goal of this research is to verify the thesis that calibrating cameras on a checkerboard gives worse results in determining the IOP than on a laboratory test field, which may carry over into the resulting model. For the purposes of this study, a special field was constructed, allowing calibration of the instruments on the basis of a network solution by bundle adjustment. Unlike classical 2D fields, the field is equipped with a cork background providing a good base for matching and automatically detecting measurement marks. The calibration results were compared with a calibration performed on a checkerboard using the MATLAB Camera Calibration Toolbox. To determine the IOP in MATLAB, images of the checkerboard must be taken so that the whole pattern fits into the frame; otherwise the toolbox defines the coordinate system incorrectly, which adversely affects the calibration results. Moreover, the parameters determined this way have standard deviations several times larger than those determined on the laboratory test field, which confirms the thesis.
20

Wueller, Dietmar. "A new dimension in geometric camera calibration." Electronic Imaging 2020, no. 9 (January 26, 2020): 18–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-018.

Full text
Abstract:
Many test charts and software packages exist to determine the intrinsic geometric calibration of a camera, including distortion. But all of these setups have a few problems in common: they are limited to finite object distances and require large test charts for calibrations at greater distances, combined with powerful and uniform illumination. On production lines, the workaround for this problem is oftentimes the use of a relay lens, which itself introduces geometric distortions and therefore inaccuracies that need to be compensated for. A solution to overcome these problems and limitations was originally developed for space applications and has already become a common method for the calibration of satellite cameras. We have now turned the lab setup on an optical bench into a commercially available product that can be used for the calibration of a huge variety of cameras for different applications. This solution is based on a diffractive optical element (DOE) that is illuminated by a plane wave generated with an expanded laser diode beam. In addition to what the conventional methods provide, the proposed one also yields the extrinsic orientation of the camera and therefore allows cameras to be adjusted to each other.
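A DOE under plane-wave illumination produces a fan of collimated beams, i.e. targets at infinity, so each beam's image position depends only on its direction d: in the pinhole model, u ~ K·d, with no dependence on object distance. A sketch with assumed intrinsics:

```python
import numpy as np

def direction_to_pixel(K, d):
    """Image location of a collimated beam with direction d (a target at
    infinity): the projection depends only on the ray direction."""
    uvw = K @ d
    return uvw[:2] / uvw[2]

# Assumed intrinsics for illustration.
K = np.array([[900.0, 0.0, 512.0],
              [0.0, 900.0, 384.0],
              [0.0, 0.0, 1.0]])
# A beam tilted ~1 degree off the optical axis.
d = np.array([np.tan(np.radians(1.0)), 0.0, 1.0])
uv = direction_to_pixel(K, d)
```

Because the beam directions of a DOE are known very precisely from its design, fitting K (and distortion) to the observed spot positions sidesteps the large-chart and illumination problems entirely.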
APA, Harvard, Vancouver, ISO, and other styles
21

Gieseler, Oliver, Hubert Roth, and Jürgen Wahrburg. "A novel 4 camera multi-stereo tracking system for application in surgical navigation systems." tm - Technisches Messen 87, no. 7-8 (July 26, 2020): 451–58. http://dx.doi.org/10.1515/teme-2019-0144.

Full text
Abstract:
In this paper, we present a novel 4-camera stereo system for application as the optical tracking component in navigation systems for computer-assisted surgery. It shall replace a common stereo camera system in several applications. The objective is to provide a tracking component consisting of four single industrial cameras. The system can be built up flexibly in the operating room, e.g. at the operating room lamp. The concept is characterized by independent, arbitrary camera mounting poses and demands easy on-site calibration procedures for the camera setup. Following a short introduction describing the environment, motivation and advantages of the new camera system, a simulation of the camera setup and arrangement is depicted in Section 2. From this, we gather important information and parameters for the hardware setup, which is described in Section 3. Section 4 covers the calibration of the cameras. Here, we illustrate the background of the camera model and the applied calibration procedures, a comparison of calibration results obtained with different calibration programs, and a new concept for fast and easy extrinsic calibration.
APA, Harvard, Vancouver, ISO, and other styles
22

Senn, J. A., J. P. Mills, P. E. Miller, C. Walsh, S. Addy, E. Loerke, and M. V. Peppa. "ON-SITE GEOMETRIC CALIBRATION OF THERMAL AND OPTICAL SENSORS FOR UAS PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 355–61. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-355-2020.

Full text
Abstract:
Abstract. UAS imagery has become a widely used source of information in geomorphic research. When photogrammetric methods are applied to quantify geomorphic change, camera calibration is essential to ensure the accuracy of image measurements. Insufficient self-calibration based on survey data can induce systematic errors that cause DEM deformations. The typically low geometric stability of consumer-grade sensors necessitates in-situ calibration, as the reliability of a lab-based calibration can be affected by transport. In this research a robust on-site workflow is proposed that allows the time-efficient and repeatable calibration of thermal and optical sensors at the same time. A stone building was utilised as the calibration object, with TLS scans for reference. The approach was applied to calculate eight separate camera calibrations using two sensors (DJI Phantom 4 Pro and Workswell WIRIS pro), two software solutions (Vision Measurement System (VMS) and Agisoft Metashape) and two different subsets of images per sensor. The presented results demonstrate that the approach is suitable for determining camera parameters for pre-calibrating photogrammetric surveys.
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Zhaoran, Guochu Chen, and Lisi Tan. "Optimization of stereo calibration parameters for the binocular camera based on improved Beetle Antennae Search algorithm." Journal of Physics: Conference Series 2029, no. 1 (September 1, 2021): 012095. http://dx.doi.org/10.1088/1742-6596/2029/1/012095.

Full text
Abstract:
Abstract. To address the low precision of traditional stereo calibration methods for binocular cameras, a Beetle Antennae Search (BAS) algorithm based on a binocular parallel polar line constraint is proposed to improve calibration accuracy. The internal parameter matrices and distortion parameter matrices of the left and right cameras are first calculated using the MATLAB calibration toolbox, and the stereo calibration parameter matrix of the binocular camera is then computed. Next, the parallel polar line constraint is used to establish the objective function from the differences and average values of the vertical coordinates of each checkerboard corner in the left and right images after stereo rectification, and the BAS algorithm further optimizes the stereo calibration parameters. The results show that the BAS algorithm with the parallel polar line constraint is stable, reliable and fast to converge, and realizes the optimization of the stereo calibration parameters of the binocular camera.
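The optimization idea in this abstract can be sketched as follows: after stereo rectification, corresponding corners should lie on the same image row, so the vertical disparity of corner pairs serves as the objective, and a Beetle Antennae Search probes two "antennae" around the current estimate and steps toward the lower cost. This toy version optimizes a single scalar instead of the full stereo parameter set; all names and the step schedule are illustrative assumptions, not the authors' implementation:

```python
import random

def vertical_disparity(offset, left_ys, right_ys):
    """Mean absolute vertical disparity after applying a (toy) scalar
    rectification offset to the right-image corner rows.  In the paper
    the free variables are the stereo calibration parameters; a single
    scalar is used here purely to keep the sketch short."""
    return sum(abs(l - (r + offset)) for l, r in zip(left_ys, right_ys)) / len(left_ys)

def bas_minimise(f, x, step=1.0, antenna=0.5, iters=100):
    """One-dimensional Beetle Antennae Search: probe both antennae and
    step toward the smaller objective value, shrinking the step size."""
    random.seed(0)
    for _ in range(iters):
        b = random.choice([-1.0, 1.0])           # random antenna direction
        left, right = x + antenna * b, x - antenna * b
        x = x + step * b * (1 if f(right) > f(left) else -1)
        step *= 0.95                              # geometric step decay
    return x

left_ys  = [10.0, 20.0, 30.0]
right_ys = [ 8.0, 18.0, 28.0]   # right corners sit 2 px too low
best = bas_minimise(lambda o: vertical_disparity(o, left_ys, right_ys), 0.0)
```

With the shrinking step, the estimate settles near the offset (here 2 px) that makes corresponding rows agree.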
APA, Harvard, Vancouver, ISO, and other styles
24

Hu, Jiuxiang, Anshuman Razdan, and Joseph A. Zehnder. "Geometric Calibration of Digital Cameras for 3D Cumulus Cloud Measurements." Journal of Atmospheric and Oceanic Technology 26, no. 2 (February 1, 2009): 200–214. http://dx.doi.org/10.1175/2008jtecha1079.1.

Full text
Abstract:
Abstract. A technique for calibrating digital cameras for stereo photogrammetry of cumulus clouds is presented. It has been applied to characterize the formation of summer thunderstorms observed during the Cumulus Photogrammetric, In Situ, and Doppler Observations (CuPIDO) project. Starting from gross measurements of locations, orientations of cameras, and landmark surveys, accurate locations and orientations of the cameras are obtained by minimizing a geometric error (GE). Once accurate camera parameters are obtained, 3D positions of cloud-feature points are computed by triangulation. The main contributions of this paper are as follows. First, it is proven that the GE has only one minimum in the neighborhood of the real parameters of a camera. In other words, searching the minimum of the GE enables the authors to find the right camera parameters even if there are significant differences between the initial measurements and their true values. Second, a new coarse-to-fine iterative algorithm is developed that minimizes the GE and finds the camera parameters. Numerical experiments show that the coarse-to-fine algorithm is efficient and effective. Third, a new landmark survey based on a geographic information system (GIS) rather than field measurements is presented. The GIS landmark survey is an effective and efficient way to obtain landmark world coordinates for camera calibrations in these experiments. Validation of this technique is achieved by the data collected by a NASA/Earth Observing System satellite and an instrumented aircraft. This paper builds on previous research and details the calibration and 3D reconstructions.
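Once accurate camera parameters are known, the triangulation step mentioned in this abstract computes 3D feature positions from rays cast by two cameras. A generic midpoint triangulation sketch (an illustration of the principle, not the paper's exact formulation) is:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point closest to two camera rays,
    each given as origin o plus t * direction d (3-element lists)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]
    # Standard closest-points-on-two-lines solution
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))      # closest point on ray 1
    p2 = add(o2, scale(d2, t2))      # closest point on ray 2
    return scale(add(p1, p2), 0.5)   # midpoint between the two

# Two rays that intersect exactly at (0, 0, 10)
point = triangulate_midpoint([-1, 0, 0], [1, 0, 10], [1, 0, 0], [-1, 0, 10])
```

With noisy real observations the rays do not intersect, and the midpoint gives a sensible compromise; the residual gap between the rays is one diagnostic of calibration quality.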
APA, Harvard, Vancouver, ISO, and other styles
25

Cramer, M., H. J. Przybilla, and A. Zurhorst. "UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 85–92. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-85-2017.

Full text
Abstract:
Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil the requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration; in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how the block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
APA, Harvard, Vancouver, ISO, and other styles
26

Urquhart, Bryan, Ben Kurtz, and Jan Kleissl. "Sky camera geometric calibration using solar observations." Atmospheric Measurement Techniques 9, no. 9 (September 5, 2016): 4279–94. http://dx.doi.org/10.5194/amt-9-4279-2016.

Full text
Abstract:
Abstract. A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. Calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
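The equisolid-angle projection mentioned in this abstract maps a ray at angle θ from the optical axis to the image radius r = 2f sin(θ/2); calibration then compares the sun position predicted by the model with the sun centroid detected in the image. A hedged sketch with illustrative parameter names (not the authors' code):

```python
import math

def equisolid_radius(theta, f):
    """Equisolid-angle fisheye projection: r = 2 f sin(theta / 2),
    with theta the angle between the optical axis and the incoming ray."""
    return 2.0 * f * math.sin(theta / 2.0)

def sun_residual(theta, azimuth, f, cx, cy, u_detected, v_detected):
    """Pixel residual between the sun position predicted by the camera
    model and the sun centroid detected in the image.  Summing this over
    many sun observations gives a calibration cost to minimise; the
    parameterisation here is a simplification of a full camera model."""
    r = equisolid_radius(theta, f)
    u = cx + r * math.cos(azimuth)
    v = cy + r * math.sin(azimuth)
    return math.hypot(u - u_detected, v - v_detected)

# A ray 90 degrees off-axis maps to r = 2 f sin(45 deg) = f * sqrt(2)
r = equisolid_radius(math.pi / 2, 100.0)
```

The sky-side sun direction (theta, azimuth) would come from a solar position algorithm given latitude, longitude, altitude and time, as described in the abstract.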
APA, Harvard, Vancouver, ISO, and other styles
27

Lim, P. C., J. Seo, J. Son, and T. Kim. "ANALYSIS OF ORIENTATION ACCURACY OF AN UAV IMAGE ACCORDING TO CAMERA CALIBRATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 437–42. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-437-2019.

Full text
Abstract:
Abstract. Utilization of UAVs is increasing because of their easy operation and time-saving advantages. Compared with other remote sensing platforms, the biggest difference of a small UAV is its unstable flight attitude due to limited platform stability. UAVs are equipped with commercial-grade cameras, unlike the expensive cameras mounted on manned aircraft or satellite platforms. The quality of the map is determined by the characteristics of the UAV and camera performance. In this study, the accuracy of orientation parameters according to UAV camera calibration options was analysed. The camera calibration options were no calibration, self-calibration, and calibration by a public calibration toolkit with manual corner measurement. We used four different types of UAVs and three types of software. Interior and exterior orientation parameters according to the camera calibration options were obtained from each software package. The results of processing under each camera calibration option differed from each other. This may indicate that the UAV camera calibration was not performed accurately and still needs further improvement.
APA, Harvard, Vancouver, ISO, and other styles
28

Tan, Jia Hai, Peng Yu Li, You Shan Qu, Ya Meng Han, Ya Li Yu, and Wei Wang. "Design of Calibration System for a Great Quantity of High Precision Scientific Grade CCD Cameras." Applied Mechanics and Materials 331 (July 2013): 326–30. http://dx.doi.org/10.4028/www.scientific.net/amm.331.326.

Full text
Abstract:
For the calibration of a great quantity of scientific grade CCD cameras in a high energy physics system, a scientific grade CCD camera calibration system with high precision and efficiency is designed. The system consists of a 1053 nm nanosecond solid-state laser, a knife, a double-integrating sphere, a laser power meter, a signal generator, and a computer with data processing software. Key technical parameters of a scientific grade CCD under 1053 nm optical pulses, namely modulation, contrast, defects, optical dynamic range and non-linear response, can be calibrated by the designed system. A double-integrating sphere with high uniformity and stability is designed as a uniform light source, which improves the calibration performance and accuracy. Experimental results show that the system designed in this paper can calibrate a large number of scientific grade CCD cameras quickly and efficiently.
APA, Harvard, Vancouver, ISO, and other styles
29

Hardner, M., and D. Schneider. "DEVELOPMENT OF A MULTI CAMERA CALIBRATION FOR AN ANALYSIS AND MANIPULATION SYSTEM OF BIOLOGICAL SAMPLES IN PETRI DISHES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W18 (November 29, 2019): 67–72. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w18-67-2019.

Full text
Abstract:
Abstract. Cameras can provide accurate 3D positioning for a robotic system. Calibration of the interior orientation, as well as the relative orientation when using multiple cameras, is a necessary step. Integrating the automatic calibration process into the robot system is useful to provide a simple system for the user. The paper presents a method for calibrating a multi-camera system using the open source Ceres solver and compares it to commercial software. The motivation is a measurement system for laboratory automation in which biological samples in Petri dishes will be manipulated with different tools. For this application we examine the camera setup and show first results of the calibration and the achievable accuracy. Lastly, we draw conclusions for the final system prototype and the necessary improvements to be implemented in order to provide accurate measurements.
APA, Harvard, Vancouver, ISO, and other styles
30

Van Crombrugge, Izaak, Rudi Penne, and Steve Vanlanduit. "Extrinsic Camera Calibration with Line-Laser Projection." Sensors 21, no. 4 (February 5, 2021): 1091. http://dx.doi.org/10.3390/s21041091.

Full text
Abstract:
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
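The core residual in such a line-based extrinsic calibration is the distance between the image observations of a projected laser line and the line predicted from the current pose estimate; bundle adjustment drives this cost toward zero over all cameras and lines. A generic point-to-line cost sketch (illustrative only, not the authors' implementation):

```python
import math

def point_line_distance(px, py, a, b, c):
    """Perpendicular distance from (px, py) to the line a*x + b*y + c = 0."""
    return abs(a * px + b * py + c) / math.hypot(a, b)

def line_cost(observed_points, a, b, c):
    """Sum of squared point-to-line distances: the kind of residual a
    bundle adjustment would minimise over camera and plane poses so that
    the projected laser lines match the observed ones."""
    return sum(point_line_distance(x, y, a, b, c) ** 2 for x, y in observed_points)

# Points lying exactly on the line x - y = 0 contribute zero cost
cost = line_cost([(0, 0), (1, 1), (2, 2)], 1.0, -1.0, 0.0)
```

In the full method the line coefficients (a, b, c) are themselves functions of the plane pose and each camera's extrinsics, which is what makes the problem a joint optimization rather than independent line fits.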
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Xinhua, Jie Tian, Hailan Kuang, and Xiaolin Ma. "A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board." Electronics 11, no. 4 (February 17, 2022): 627. http://dx.doi.org/10.3390/electronics11040627.

Full text
Abstract:
In multi-camera 3D reconstruction applications, each camera must be calibrated separately and a stereo calibration across the cameras performed as well; the calibration accuracy directly affects the quality of the system's 3D reconstruction. Many researchers focus on optimizing the calibration algorithm and improving calibration accuracy after the calibration plate pattern coordinates have been obtained, ignoring the impact of the accuracy of pattern coordinate extraction on the calibration itself. Therefore, this paper proposes a multi-camera stereo calibration method based on a circular calibration plate that focuses on the extraction of pattern features during the calibration process. The method performs subpixel edge extraction based on the Franklin matrix and circular feature extraction on the calibration plate pattern captured by each camera, and then calibrates the cameras using Zhang's method. Experimental results show that, compared with the traditional calibration method, this method achieves a better calibration effect and accuracy, reducing the average reprojection error of the multi-camera system by more than 0.006 pixels.
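The circular-feature extraction described here ultimately requires fitting circle centres to sub-pixel edge points. A common algebraic least-squares (Kåsa) circle fit can stand in as a sketch of that step; the paper's Franklin-matrix edge detector is not reproduced:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to edge points: solve
    for centre (cx, cy) and radius r from the linearised model
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = pts[:, 0] ** 2 + pts[:, 1] ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0], sol[1]
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noise-free points sampled on a circle of radius 2 centred at (1, -1)
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
cx, cy, r = fit_circle(np.column_stack([1 + 2 * np.cos(t), -1 + 2 * np.sin(t)]))
```

The fitted centres would then serve as the control-point observations fed into Zhang's calibration.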
APA, Harvard, Vancouver, ISO, and other styles
32

Cumani, Aldo, and Antonio Guiducci. "Geometric camera calibration: the virtual camera approach." Machine Vision and Applications 8, no. 6 (November 1995): 375–84. http://dx.doi.org/10.1007/bf01213499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Cumani, Aldo, and Antonio Guiducci. "Geometric camera calibration: the virtual camera approach." Machine Vision and Applications 8, no. 6 (December 1, 1995): 375–84. http://dx.doi.org/10.1007/s001380050019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, J. L. "AN ITERATIVE APPROACH FOR SELF-CALIBRATING BUNDLE ADJUSTMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 97–102. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-97-2020.

Full text
Abstract:
Abstract. Obtaining accurate image interior and exterior orientations is the key to improving 3D measurement accuracy, besides reliable and accurate image matching. A majority of the cameras used for these tasks are non-metric cameras, which commonly suffer various distortions. Generally, there are two ways to remove these distortions: 1) conducting a prior camera calibration in a controlled environment; 2) applying self-calibrating bundle adjustment in the application environment. Both approaches have their advantages and disadvantages, but they have one thing in common: no universal calibration model is available so far that can remove all sorts of distortions in images and systematic errors in image orientations. Instead of developing additional calibration models for camera calibration and self-calibrating adjustment, this paper presents a novel approach which applies self-calibrating bundle adjustment in an iterative fashion: after performing a conventional self-calibrating bundle adjustment, the image coordinates of tie points are re-calculated using the newly obtained self-calibration model coefficients, and the self-calibrating bundle adjustment is applied again, in the hope that the remaining distortions and systematic errors will be reduced further within the next few iterations. Using a "virtual image" concept, this iterative approach does not require resampling images or re-measuring tie points during iterations, and only costs the computational resources of a few additional iterations. Several trials under various application environments were conducted using the proposed iterative approach, and the results indicate that not only can the distortions be reduced further, but the image orientations also become much more stable after a few iterations.
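A concrete, much-simplified instance of why such iteration helps is the fixed-point inversion of a radial distortion model: a single correction pass evaluated at the distorted radius leaves residual distortion, and re-applying the model over a few passes removes it. The one-coefficient model below is an illustrative assumption, not the paper's calibration model:

```python
def undistort_iterative(xd, yd, k1, iters=5):
    """Invert the radial distortion x_d = x_u * (1 + k1 * r_u^2) by
    fixed-point iteration: start from the distorted point and repeatedly
    divide by the distortion factor evaluated at the current estimate.
    Each pass reduces the residual error left by the previous one."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

# Distort a known point, then recover it by iteration
k1 = 0.1
xu_true, yu_true = 0.5, 0.5
f = 1.0 + k1 * (xu_true ** 2 + yu_true ** 2)
xu, yu = undistort_iterative(xu_true * f, yu_true * f, k1)
```

After five passes the recovered coordinates agree with the true undistorted point to well below a ten-thousandth of a unit, whereas a single pass still carries a visible residual.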
APA, Harvard, Vancouver, ISO, and other styles
35

Hillemann, M., and B. Jutzi. "UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W3 (August 18, 2017): 17–24. http://dx.doi.org/10.5194/isprs-annals-iv-2-w3-17-2017.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension of the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for multi-camera systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.
APA, Harvard, Vancouver, ISO, and other styles
36

Zhu, Yan, and Keith Cherkauer. "Pixel-Based Calibration and Atmospheric Correction of a UAS-Mounted Thermal Camera for Land Surface Temperature Measurements." Transactions of the ASABE 64, no. 6 (2021): 2137–50. http://dx.doi.org/10.13031/trans.14631.

Full text
Abstract:
Highlights: A novel pixel-based calibration algorithm and an atmospheric correction method are developed. Application of the calibration methods reduces the RMSE of measurements to less than 1.32°C. The calibrations facilitate stitching of images together to form whole-field mosaics.
Abstract. Thermal imagery can be used to provide insight into the water stress status and evapotranspiration demand of crops, but satellite-based sensors are generally too coarse spatially and too infrequent temporally to provide information of use for the management of specific fields. Thermal cameras mounted on small unmanned aerial systems (UAS) have potential to provide canopy temperature information at high spatial and temporal resolutions useful for crop management; however, without appropriate camera corrections, the measurement biases of these uncooled thermal cameras can be larger than ±5°C. Such uncertainty can render the camera measurements useless. In this research, a pixel-based (non-uniformity) calibration algorithm and an atmospheric correction method based on in-field approximate blackbody sources (water targets) were developed for a thermal camera. The objective was to improve the temperature measurement accuracy of the thermal camera on various land surfaces including soil and vegetation. With sufficient accuracy, temperature measurements can be used for the estimation of latent heat flux of field crops in the future. The thermal camera was first calibrated in a laboratory setting where the camera and environmental conditions were controlled. The results indicated that in the range between 10°C and 45°C, the calibrated temperatures were accurate, with an average bias of 1.76°C, and had a high linear correlation with reference temperatures (water target temperatures) (R2 > 0.99). Variability of measurements was also better constrained. In-field atmospheric correction is also important for obtaining high-accuracy thermal imagery. By applying both pixel-based calibration and atmospheric corrections, the RMSE (root mean square error) of validation targets from two dates in 2017 was reduced from 4.56°C and 6.36°C before calibration to 1.32°C and 1.24°C after calibration. The calibration process also increased the range of temperatures in the imagery, which enhanced contrast and may help with identification of tie-points and stitching of images together to form whole-field mosaics. Keywords: Atmospheric correction, Pixel-based calibration, Thermal remote sensing, UAS, Water targets.
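A pixel-based (non-uniformity) calibration of the kind described can be sketched as a two-point correction: each pixel gets its own gain and offset from raw readings of two reference targets at known temperatures, the role played by the water targets in the paper. The linear per-pixel model and all names below are simplifying assumptions, not the authors' exact algorithm:

```python
def two_point_nuc(raw, raw_cold, raw_hot, t_cold, t_hot):
    """Two-point non-uniformity correction: per-pixel gain and offset
    derived from raw readings of a cold and a hot reference target at
    known temperatures, then applied to a scene reading."""
    corrected = []
    for r, rc, rh in zip(raw, raw_cold, raw_hot):
        gain = (t_hot - t_cold) / (rh - rc)       # degC per raw count, per pixel
        corrected.append(t_cold + gain * (r - rc))
    return corrected

# Three pixels with different offsets, all viewing the same 25 degC scene
raw      = [150.0, 175.0, 140.0]
raw_cold = [100.0, 125.0,  90.0]   # readings at the 10 degC reference
raw_hot  = [200.0, 225.0, 190.0]   # readings at the 40 degC reference
temps = two_point_nuc(raw, raw_cold, raw_hot, 10.0, 40.0)
```

After correction, the three pixels report the same temperature despite their differing raw responses, which is exactly the effect that makes the mosaics stitchable.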
APA, Harvard, Vancouver, ISO, and other styles
37

Xiong, Jiu Long, Jun Ying Xia, Xian Quan Xu, and Zhen Tian. "A Novel Method of Stereo Camera Calibration Using BP Neural Network." Applied Mechanics and Materials 29-32 (August 2010): 2692–97. http://dx.doi.org/10.4028/www.scientific.net/amm.29-32.2692.

Full text
Abstract:
Camera calibration establishes the relationship between 2D coordinates in the image and 3D coordinates in the world. A BP neural network can model non-linear relationships and was therefore used in this paper to calibrate the camera while avoiding explicit modelling of its non-linear factors. The calibration results are compared with those of Tsai's two-stage method, and the comparison shows that the BP neural network-based calibration method improves the calibration accuracy.
APA, Harvard, Vancouver, ISO, and other styles
38

Gašparović, M., and D. Gajski. "TWO-STEP CAMERA CALIBRATION METHOD DEVELOPED FOR MICRO UAV'S." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 829–33. http://dx.doi.org/10.5194/isprs-archives-xli-b1-829-2016.

Full text
Abstract:
The development of unmanned aerial vehicles (UAVs) and the continuous price reduction of unmanned systems attracted us to this research. Professional measuring systems are dozens of times more expensive and often heavier than "amateur", non-metric UAVs. For this reason, we tested the DJI Phantom 2 Vision Plus UAV. The Phantom's smaller mass and velocity give it less kinetic energy than professional measurement platforms, which makes it potentially less dangerous for use in populated areas. In this research, we wanted to investigate the capability of such a non-metric UAV and find the procedures under which this kind of UAV may be used for photogrammetric survey. It is important to emphasize that the UAV is equipped with an ultra-wide-angle camera with a 14 MP sensor. Calibration of such cameras is a complex process. A new two-step process is presented and developed, and the results are compared with the standard one-step camera calibration procedure. The two-step process first removes distortion from all images and then uses these images in phototriangulation with self-calibration. The paper presents statistical indicators which prove that the proposed two-step process is a better and more accurate procedure for calibrating these types of cameras than the standard one-step calibration. We therefore suggest the two-step calibration process as the standard for ultra-wide-angle cameras on unmanned aircraft.
APA, Harvard, Vancouver, ISO, and other styles
39

Gašparović, M., and D. Gajski. "TWO-STEP CAMERA CALIBRATION METHOD DEVELOPED FOR MICRO UAV'S." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 829–33. http://dx.doi.org/10.5194/isprsarchives-xli-b1-829-2016.

Full text
Abstract:
The development of unmanned aerial vehicles (UAVs) and the continuous price reduction of unmanned systems attracted us to this research. Professional measuring systems are dozens of times more expensive and often heavier than "amateur", non-metric UAVs. For this reason, we tested the DJI Phantom 2 Vision Plus UAV. The Phantom's smaller mass and velocity give it less kinetic energy than professional measurement platforms, which makes it potentially less dangerous for use in populated areas. In this research, we wanted to investigate the capability of such a non-metric UAV and find the procedures under which this kind of UAV may be used for photogrammetric survey. It is important to emphasize that the UAV is equipped with an ultra-wide-angle camera with a 14 MP sensor. Calibration of such cameras is a complex process. A new two-step process is presented and developed, and the results are compared with the standard one-step camera calibration procedure. The two-step process first removes distortion from all images and then uses these images in phototriangulation with self-calibration. The paper presents statistical indicators which prove that the proposed two-step process is a better and more accurate procedure for calibrating these types of cameras than the standard one-step calibration. We therefore suggest the two-step calibration process as the standard for ultra-wide-angle cameras on unmanned aircraft.
APA, Harvard, Vancouver, ISO, and other styles
40

Barazzetti, L., L. Mussio, F. Remondino, and M. Scaioni. "TARGETLESS CAMERA CALIBRATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII-5/W16 (September 10, 2012): 335–42. http://dx.doi.org/10.5194/isprsarchives-xxxviii-5-w16-335-2011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Geyer, C., and K. Daniilidis. "Paracatadioptric camera calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 5 (May 2002): 687–95. http://dx.doi.org/10.1109/34.1000241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

CIOCOIU, Titus, Florin MOLDOVEANU, and Caius SULIMAN. "CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM." SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE 18, no. 1 (June 24, 2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Detchev, I., M. Mazaheri, S. Rondeel, and A. Habib. "Calibration of multi-camera photogrammetric systems." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 101–8. http://dx.doi.org/10.5194/isprsarchives-xl-1-101-2014.

Full text
Abstract:
Due to the low cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.
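The relative orientation constraints mentioned in this abstract exploit the fact that, for a rigid multi-camera rig, the transform between any two cameras is the same at every exposure station. A sketch of that bookkeeping in 4x4 homogeneous form (illustrative, not the authors' adjustment code):

```python
import numpy as np

def relative_pose(T_ref, T_sec):
    """Relative orientation T_rel with T_sec = T_ref @ T_rel: the
    transform the constraint forces to be identical at every station."""
    return np.linalg.inv(T_ref) @ T_sec

def make_pose(angle, tx):
    """Toy pose: rotation about z by 'angle' plus translation tx along x."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3] = tx
    return T

# The same rigid rig observed from two stations recovers one relative pose
T_rel_true = make_pose(0.1, 0.2)
rel_a = relative_pose(make_pose(0.3, 1.0), make_pose(0.3, 1.0) @ T_rel_true)
rel_b = relative_pose(make_pose(-0.7, 5.0), make_pose(-0.7, 5.0) @ T_rel_true)
```

In a "minor" calibration only this single shared T_rel per camera pair is re-estimated, while the interior orientation parameters stay fixed.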
APA, Harvard, Vancouver, ISO, and other styles
44

Hanel, A., and U. Stilla. "STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 181–88. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-181-2017.

Full text
Abstract:
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. Beforehand, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as is a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Chenchen, Shaoqi Wang, Yi Cao, Shuang-Hua Yang, and Bin Bai. "Online Pyrometry Calibration for Industrial Combustion Process Monitoring." Processes 10, no. 9 (August 26, 2022): 1694. http://dx.doi.org/10.3390/pr10091694.

Full text
Abstract:
Temperature and its distribution are crucial for combustion monitoring and control. For this application, digital camera-based pyrometers have become increasingly popular due to their relatively low cost. However, these pyrometers are not universally applicable because they depend on calibration. In contrast to pyrometers, monitoring cameras exist in almost every combustion chamber. Although these cameras theoretically have the ability to measure temperature, for lack of calibration they are only used for visualization to support the decisions of operators. Almost all existing calibration methods are laboratory-based and hence cannot calibrate a camera in operation. This paper proposes an online calibration method. It uses a pre-calibrated camera as a standard pyrometer to calibrate another camera in operation. The calibration is based on a photo taken by the pyrometry camera at a position close to the camera in operation. Since the calibration does not affect the use of the camera in operation, it sharply reduces the cost and difficulty of pyrometer calibration. In this paper, a procedure for online calibration is proposed, and advice on how to set camera parameters is given. In addition, the ratio pyrometry is revised for a wider temperature range. The online calibration algorithm is developed based on two assumptions for images of the same flame taken in proximity: (1) there are common regions between the two images taken at close positions; (2) there are some constant characteristic temperatures between the two-dimensional temperature distributions of the same flame taken from different angles. These two assumptions are verified in a real industrial plant. Based on these two verified features, a temperature distribution matching algorithm is developed to calibrate pyrometers online. This method was tested and validated in an industrial-scale municipal solid waste incinerator. The accuracy of the calibrated pyrometer is sufficient for flame monitoring and control.
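Two-wavelength (ratio) pyrometry of the kind underlying camera-based temperature measurement can be sketched under the Wien approximation to Planck's law. This is a generic graybody round-trip with assumed channel wavelengths, not the paper's revised formulation:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T, eps=1.0):
    # Wien approximation to Planck's law (valid for lam*T << C2/5).
    return eps * lam**-5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    # Invert the two-wavelength (graybody) intensity ratio for temperature.
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1))

# Round trip at a hypothetical flame temperature with assumed channel wavelengths
T_true = 1500.0                # K
lam_r, lam_g = 650e-9, 550e-9  # red/green channel wavelengths, m (assumed)
i_r = wien_intensity(lam_r, T_true)
i_g = wien_intensity(lam_g, T_true)
T_est = ratio_temperature(i_r, i_g, lam_r, lam_g)
```

Because only the intensity ratio enters, the result is insensitive to a common scale factor (e.g. exposure), which is what makes the ratio form attractive for camera-based pyrometry.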
APA, Harvard, Vancouver, ISO, and other styles
46

Yastikli, Naci, and Esra Guler. "Performance evaluation of thermographic cameras for photogrammetric documentation of historical buildings." Boletim de Ciências Geodésicas 19, no. 4 (December 2013): 711–28. http://dx.doi.org/10.1590/s1982-217020130004000012.

Full text
Abstract:
Thermographic cameras record temperatures emitted by objects in the infrared region. These thermal images can be used for texture analysis and for assessing deformation caused by moisture and insulation problems. For an accurate geometric survey of the deformations, the geometric calibration and performance evaluation of the thermographic camera should be conducted properly. In this study, an approach is proposed for the geometric calibration of thermal cameras for the geometric survey of deformation caused by moisture. A 3D test object was designed and used for the geometric calibration and performance evaluation. The geometric calibration parameters, including focal length, position of the principal point, and radial and tangential distortions, were determined for both the thermographic and the digital camera. The digital image rectification performance of the thermographic camera was tested for photogrammetric documentation of deformation caused by moisture. The results obtained from the thermographic camera were compared with the results from the digital camera, based on an experimental investigation performed on a study area.
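The radial and tangential distortion parameters estimated in such a geometric calibration are commonly expressed with the Brown-Conrady model. The following sketch applies assumed coefficients to normalized image coordinates and projects the result to pixels; all parameter values are hypothetical:

```python
def distort(x, y, k1, k2, p1, p2):
    # Brown-Conrady model: radial terms (k1, k2) and tangential terms (p1, p2)
    # applied to normalized image coordinates (x, y).
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

def to_pixels(x, y, f, cx, cy):
    # Map normalized coordinates to pixels with focal length f and principal
    # point (cx, cy) -- the interior orientation parameters named above.
    return f * x + cx, f * y + cy

# Hypothetical thermographic-camera parameters
x_d, y_d = distort(0.1, -0.05, k1=-0.2, k2=0.05, p1=1e-4, p2=-5e-5)
u, v = to_pixels(x_d, y_d, f=800.0, cx=320.0, cy=240.0)
```

Calibration estimates (f, cx, cy, k1, k2, p1, p2) by minimizing the reprojection error of known target points; rectification then inverts this mapping numerically.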
APA, Harvard, Vancouver, ISO, and other styles
47

Kwan, Elliott, and Hong Hua. "Calibration of transverse ray and pupil aberrations for light field cameras." Applied Optics 61, no. 24 (August 11, 2022): 6974. http://dx.doi.org/10.1364/ao.465129.

Full text
Abstract:
The accuracy of reconstructing depth maps or performing digital refocusing in light field cameras depends largely on how well the spatial and angular samples of light rays can be obtained. Ray sample errors induced by optical aberrations in a light field camera may be digitally corrected using ray tracing data when its nominal lens design is available. However, most commonly the nominal lens prescription is not accessible to end users. Additionally, even if it is available, the ray tracing data can be inaccurate due to tolerances in the optomechanical design. We propose a calibration method based on measurements of fiducial markers on a checkerboard for modeling the imaging properties of light field cameras. The calibration accounts for vignetting, transverse ray errors, as well as pupil aberration, and can be applied to light field camera modeling of arbitrary pupil sampling systems. We further demonstrate the utility of the method for calibrating a tri-aperture camera that captures simultaneous stereo views via artificially induced transverse ray errors.
APA, Harvard, Vancouver, ISO, and other styles
48

Kemper, G., B. Melykuti, and C. Yu. "CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 3, 2016): 205–9. http://dx.doi.org/10.5194/isprsarchives-xli-b1-205-2016.

Full text
Abstract:
Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, reducing the number of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well distributed measured points, which is a satisfying number for the camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. With all these previous steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction with a rapid update of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
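The lever-arm correction described in the abstract can be sketched numerically: the world position of a camera's nodal point is the GNSS antenna position plus the body-frame lever arm rotated by the platform attitude. A z-y-x roll/pitch/yaw convention is assumed, and all values below are hypothetical:

```python
import math

def rpy_matrix(roll, pitch, yaw):
    # Body-to-world rotation from roll/pitch/yaw (radians), z-y-x convention.
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def camera_position(antenna_world, attitude, lever_arm_body):
    # World position of a camera nodal point: antenna position plus the
    # body-frame lever arm rotated into the world frame.
    R = rpy_matrix(*attitude)
    offset = [sum(R[i][k] * lever_arm_body[k] for k in range(3)) for i in range(3)]
    return [antenna_world[i] + offset[i] for i in range(3)]

# Hypothetical values: level flight, heading 90 degrees, 0.3 m forward lever arm
pos = camera_position([1000.0, 2000.0, 500.0],
                      (0.0, 0.0, math.radians(90)),
                      [0.3, 0.0, 0.0])
```

A gyro-stabilized mount makes the lever arm vary with the stabilization angles, which is why the abstract stresses registering it together with the raw GNSS-IMU data.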
APA, Harvard, Vancouver, ISO, and other styles
49

Kemper, G., B. Melykuti, and C. Yu. "CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 3, 2016): 205–9. http://dx.doi.org/10.5194/isprs-archives-xli-b1-205-2016.

Full text
Abstract:
Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, reducing the number of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well distributed measured points, which is a satisfying number for the camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. With all these previous steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction with a rapid update of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
APA, Harvard, Vancouver, ISO, and other styles
50

Brunn, A., and T. Meyer. "CALIBRATION OF A MULTI-CAMERA ROVER." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 445–52. http://dx.doi.org/10.5194/isprs-archives-xli-b5-445-2016.

Full text
Abstract:
Multi-camera rovers have recently become available for common terrestrial surveying tasks. This technique is new for surveyors. Although photogrammetric specialists realize the benefits of such systems immediately, surveyors have difficulties finding efficient uses for them. To approach these new measurement systems, the technique has to be understood and confidence in the accuracy has to grow. In this paper we analyze the accuracy of a multi-camera rover in an indoor test field using photogrammetric algorithms. The results show that knowledge of the interior orientation parameters of the cameras and of the relative orientation among the cameras is essential for precise geometric reconstruction. Knowing these additional data, highly accurate results become possible.
APA, Harvard, Vancouver, ISO, and other styles