Journal articles on the topic 'Depth camera'

Consult the top 50 journal articles for your research on the topic 'Depth camera.'

1

Yamazoe, Hirotake, Hiroshi Habe, Ikuhisa Mitsugami, and Yasushi Yagi. "Depth error correction for projector-camera based consumer depth cameras." Computational Visual Media 4, no. 2 (March 14, 2018): 103–11. http://dx.doi.org/10.1007/s41095-017-0103-7.

2

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (July 8, 2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of different depth cameras need to be unified into a single coordinate system, and the multiple camera systems with a specific angle have a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration plane are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but it can also be applied to all 3D calibration objects consisting of planar chessboards. This method can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
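As a brief illustration of the idea behind unifying several depth cameras into one coordinate system via per-camera extrinsics to a shared 3D target (this is a generic numpy sketch, not the authors' implementation; the rotation/translation values, point clouds, and the helper name `to_target_frame` are placeholder assumptions):

```python
import numpy as np

def to_target_frame(points_cam, R, t):
    """Map 3D points from a depth camera's frame into the common target
    (world) frame, given extrinsics R, t such that X_cam = R @ X_target + t,
    hence X_target = R.T @ (X_cam - t)."""
    return (R.T @ (points_cam - t).T).T

# Hypothetical extrinsics of two depth cameras w.r.t. the 3D target,
# e.g. each obtained by calibrating the camera against its own target plane.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 1.0])
R2 = np.array([[0.0, 0.0, -1.0],
               [0.0, 1.0,  0.0],
               [1.0, 0.0,  0.0]])          # camera 2 rotated about the Y axis (placeholder)
t2 = np.array([1.0, 0.0, 1.0])

# Point clouds measured independently by each camera (placeholder data).
cloud1_cam = np.array([[0.1, 0.2, 0.9]])
cloud2_cam = np.array([[0.8, 0.2, 0.3]])

# After the transformation both clouds live in the single target frame
# and can be merged into one 3D reconstruction.
merged = np.vstack([to_target_frame(cloud1_cam, R1, t1),
                    to_target_frame(cloud2_cam, R2, t2)])
print(merged)
```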
3

Chiu, Chuang-Yuan, Michael Thelwell, Terry Senior, Simon Choppin, John Hart, and Jon Wheat. "Comparison of depth cameras for three-dimensional reconstruction in medicine." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 9 (June 28, 2019): 938–47. http://dx.doi.org/10.1177/0954411919859922.

Abstract:
KinectFusion is a typical three-dimensional reconstruction technique which enables generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras) and compare these results with those of a commercial three-dimensional scanning system to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, Microsoft Kinect V2 and Intel Realsense D435, were selected as the representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. Thus, this suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image capturing sensor.
4

Yang, Yuxiang, Xiang Meng, and Mingyu Gao. "Vision System of Mobile Robot Combining Binocular and Depth Cameras." Journal of Sensors 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/4562934.

Abstract:
In order to optimize three-dimensional (3D) reconstruction and obtain more precise actual distances to the object, a 3D reconstruction system combining binocular and depth cameras is proposed in this paper. The whole system consists of two identical color cameras, a TOF depth camera, an image processing host, a mobile robot control host, and a mobile robot. Because of structural constraints, the resolution of the TOF depth camera is very low, which hardly meets the requirements of trajectory planning. The resolution of binocular stereo cameras can be very high, but stereo matching performs poorly in low-texture scenes, so binocular stereo cameras alone also struggle to meet the accuracy requirements. The proposed system therefore integrates the depth camera with stereo matching to improve the precision of the 3D reconstruction. Moreover, a double-thread processing method is applied to improve the efficiency of the system. The experimental results show that the system can effectively improve the accuracy of 3D reconstruction, accurately determine the distance from the camera, and support the trajectory planning strategy.
5

Zhang, Huang, and Zhao. "A New Model of RGB-D Camera Calibration Based On 3D Control Field." Sensors 19, no. 23 (November 21, 2019): 5082. http://dx.doi.org/10.3390/s19235082.

Abstract:
With extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration becomes more and more critical to the sensors. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to many different kinds of RGB-D cameras. In this paper, by using the collinear equation and space resection of photogrammetry, we present a new model to correct the depth and calibrate the relative pose between depth and RGB cameras based on a 3D control field. We establish a rigorous relationship model between the two cameras; then, we optimize the relative parameters of two cameras by least-squares iteration. For depth correction, based on the extrinsic parameters related to object space, the reference depths are calculated by using a collinear equation. Then, we calibrate the depth measurements with consideration of the distortion of pixels in depth images. We apply Kinect-2 to verify the calibration parameters by registering depth and color images. We test the effect of depth correction based on 3D reconstruction. Compared to the registration results from a state-of-the-art calibration model, the registration results obtained with our calibration parameters improve dramatically. Likewise, the performances of 3D reconstruction demonstrate obvious improvements after depth correction.
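As a rough illustration of the depth-correction step described in this abstract (not the paper's actual model, which also accounts for per-pixel distortion and uses collinearity-based reference depths), a linear correction of measured depths against reference depths can be fitted by least squares; the sample values and the coefficients `a`, `b` below are assumptions for the sketch:

```python
import numpy as np

# Measured depths from the depth camera (metres) and reference depths
# computed from the calibrated extrinsics (placeholder values).
d_measured = np.array([0.52, 1.03, 1.55, 2.08, 2.61])
d_reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50])

# Fit d_reference ≈ a * d_measured + b by linear least squares.
A = np.column_stack([d_measured, np.ones_like(d_measured)])
(a, b), *_ = np.linalg.lstsq(A, d_reference, rcond=None)

d_corrected = a * d_measured + b
rms = np.sqrt(np.mean((d_corrected - d_reference) ** 2))
print(f"a={a:.4f}, b={b:.4f}, residual RMS={rms:.4f} m")
```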
6

Wang, Tian-Long, Lin Ao, Jie Zheng, and Zhi-Bin Sun. "Reconstructing Depth Images for Time-of-Flight Cameras Based on Second-Order Correlation Functions." Photonics 10, no. 11 (October 31, 2023): 1223. http://dx.doi.org/10.3390/photonics10111223.

Abstract:
Depth cameras are closely related to our daily lives and have been widely used in fields such as machine vision, autonomous driving, and virtual reality. Despite their diverse applications, depth cameras still encounter challenges like multi-path interference and mixed pixels. Compared to traditional sensors, depth cameras have lower resolution and a lower signal-to-noise ratio. Moreover, when used in environments with scattering media, object information scatters multiple times, making it difficult for time-of-flight (ToF) cameras to obtain effective object data. To tackle these issues, we propose a solution that combines ToF cameras with second-order correlation transform theory. In this article, we explore the utilization of ToF camera depth information within a computational correlated imaging system under ambient light conditions. We integrate compressed sensing and non-training neural networks with ToF technology to reconstruct depth images from a series of measurements at a low sampling rate. The research indicates that by leveraging the depth data collected by the camera, we can recover negative depth images. We analyzed and addressed the reasons behind the generation of negative depth images. Additionally, under undersampling conditions, the use of reconstruction algorithms results in a higher peak signal-to-noise ratio compared to images obtained from the original camera. The results demonstrate that the introduced second-order correlation transformation can effectively reduce noise originating from the ToF camera itself and direct ambient light, thereby enabling the use of ToF cameras in complex environments such as scattering media.
7

Zhou, Yang, Danqing Chen, Jun Wu, Mingyi Huang, and Yubin Weng. "Calibration of RGB-D Camera Using Depth Correction Model." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2203/1/012032.

Abstract:
This paper proposes a calibration method for an RGB-D camera, especially its depth camera. First, calibration images are collected with a checkerboard calibration board under an auxiliary infrared light source. Then, the internal and external parameters of the depth camera are calculated by Zhang's calibration method, which improves the accuracy of the internal parameters. Next, a depth correction model is proposed to directly calibrate the distortion of the depth image, which is more intuitive and faster than the disparity distortion correction model. The method is simple, highly precise, and suitable for most depth cameras.
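Zhang's checkerboard method mentioned above is what OpenCV's standard calibration pipeline implements. The snippet below is a generic sketch of that pipeline, not the paper's code; the image file pattern, the 9x6 board geometry, and the square size are assumptions:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)     # inner corners of the assumed checkerboard
square = 0.025       # square size in metres (assumption)

# Ideal board coordinates on the z = 0 plane.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_ir/*.png"):   # IR images captured under the auxiliary light
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Zhang-style calibration: intrinsics K, distortion, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("reprojection RMS:", rms)
```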
8

Tu, Li-fen, and Qi Peng. "Method of Using RealSense Camera to Estimate the Depth Map of Any Monocular Camera." Journal of Electrical and Computer Engineering 2021 (May 18, 2021): 1–9. http://dx.doi.org/10.1155/2021/9152035.

Abstract:
Robot detection, recognition, positioning, and other applications require not only real-time video image information but also the distance from the target to the camera, that is, depth information. This paper proposes a method to automatically generate any monocular camera depth map based on RealSense camera data. By using this method, any current single-camera detection system can be upgraded online. Without changing the original system, the depth information of the original monocular camera can be obtained simply, and the transition from 2D detection to 3D detection can be realized. In order to verify the effectiveness of the proposed method, a hardware system was constructed using the Micro-vision RS-A14K-GC8 industrial camera and the Intel RealSense D415 depth camera, and the depth map fitting algorithm proposed in this paper was used to test the system. The results show that, except for a few depth-missing areas, the results of other areas with depth are still good, which can basically describe the distance difference between the target and the camera. In addition, in order to verify the scalability of the method, a new hardware system was constructed with different cameras, and images were collected in a complex farmland environment. The generated depth map was good, which could basically describe the distance difference between the target and the camera.
9

Unger, Michael, Adrian Franke, and Claire Chalopin. "Automatic depth scanning system for 3D infrared thermography." Current Directions in Biomedical Engineering 2, no. 1 (September 1, 2016): 369–72. http://dx.doi.org/10.1515/cdbme-2016-0162.

Abstract:
Infrared thermography can be used as a pre-, intra- and post-operative imaging technique during medical treatment of patients. Modern infrared thermal cameras are capable of acquiring images with a high sensitivity of 10 mK and beyond. They provide a planar image of an examined 3D object in which this high sensitivity is only reached within a plane perpendicular to the camera axis and defined by the focus of the lens. Out-of-focus planes are blurred and their temperature values are inaccurate. A new 3D infrared thermography system is built by combining a thermal camera with a depth camera. Multiple images at varying focal planes are acquired with the infrared camera using a motorized system. The sharp regions of the individual images are projected onto the 3D object’s surface obtained by the depth camera. The system evaluation showed that the deviation between measured temperature values and a ground truth is reduced with our system.
10

Haider, Azmi, and Hagit Hel-Or. "What Can We Learn from Depth Camera Sensor Noise?" Sensors 22, no. 14 (July 21, 2022): 5448. http://dx.doi.org/10.3390/s22145448.

Abstract:
Although camera and sensor noise are often disregarded, assumed negligible or dealt with in the context of denoising, in this paper we show that significant information can actually be deduced from camera noise about the captured scene and the objects within it. Specifically, we deal with depth cameras and their noise patterns. We show that from sensor noise alone, the object’s depth and location in the scene can be deduced. Sensor noise can indicate the source camera type and, within a camera type, the specific device used to acquire the images. Furthermore, we show that the noise distribution on surfaces provides information about the light direction within the scene and makes it possible to distinguish between real and masked faces. Finally, we show that the size of depth shadows (missing depth data) is a function of the object’s distance from the background, its distance from the camera and the object’s size; hence, it can be used to authenticate an object’s location in the scene. This paper provides tools and insights into what can be learned from depth camera sensor noise.
11

Mansour, Mostafa, Pavel Davidson, Oleg Stepanov, and Robert Piché. "Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: A Computer Vision Approach." Remote Sensing 11, no. 17 (August 23, 2019): 1990. http://dx.doi.org/10.3390/rs11171990.

Abstract:
Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in depth estimation to stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras as well as the camera motion parameters serve as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provide better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.
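In the simplest rectified, purely lateral-motion case, the two cues compared in this study reduce to the same triangulation relation, with the camera translation playing the role of the stereo baseline. The toy numbers below are illustrative assumptions, not values from the paper:

```python
# Depth from binocular disparity (rectified stereo pair):
#   Z = f * B / d, with focal length f [px], baseline B [m], disparity d [px].
f_px = 700.0      # assumed focal length in pixels
baseline = 0.12   # assumed stereo baseline in metres
disparity = 8.4   # assumed disparity in pixels
z_stereo = f_px * baseline / disparity

# Depth from motion parallax for a purely lateral camera translation:
# the translation T (e.g. from odometry) replaces the baseline, and the
# tracked feature shift between frames replaces the disparity.
translation = 0.30   # camera translation between frames [m]
pixel_shift = 21.0   # tracked feature displacement [px]
z_parallax = f_px * translation / pixel_shift

print(f"stereo depth   ~ {z_stereo:.2f} m")
print(f"parallax depth ~ {z_parallax:.2f} m")
```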
12

Zeller, N., F. Quint, and U. Stilla. "Calibration and accuracy analysis of a focused plenoptic camera." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3 (August 7, 2014): 205–12. http://dx.doi.org/10.5194/isprsannals-ii-3-205-2014.

Abstract:
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how from the recorded raw image a depth map can be estimated. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map two new model based methods, which make use of the projection concept of the camera are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series-approximation. Both model based methods show significant advantages compared to the curve fitting method. They need less reference points for calibration than the curve fitting method and moreover, supply a function which is valid in excess of the range of calibration. In addition the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
13

Kim, Sung-Yeol, Eun-Kyung Lee, and Yo-Sung Ho. "Generation of ROI Enhanced Depth Maps Using Stereoscopic Cameras and a Depth Camera." IEEE Transactions on Broadcasting 54, no. 4 (December 2008): 732–40. http://dx.doi.org/10.1109/tbc.2008.2002338.

14

Zaman, DMS, Md Hasan Maruf, Md Ashiqur Rahman, Jannatul Ferdousy, and ASM Shihavuddin. "Food Depth Estimation Using Low-Cost Mobile-Based System for Real-Time Dietary Assessment." GUB Journal of Science and Engineering 6, no. 1 (October 13, 2020): 1–11. http://dx.doi.org/10.3329/gubjse.v6i1.52044.

Abstract:
Real-time estimation of nutrition intake from regular food items using mobile-based applications could be a breakthrough in creating public awareness of the threats of overeating or faulty food choices. The bottleneck in implementing such systems is to effectively estimate the depths of the food items, which is essential for calculating the volumes of foods. The volumes and densities of food items can be used to estimate the weights of the food eaten and their corresponding nutrition contents. Without specific depth sensors, it is very difficult to estimate the depth of any object from a single camera, and such sensors are available only in very advanced and expensive mobile devices. This work investigates the possibilities of using regular cameras for the same purpose with a specific frame structure. We propose a controlled camera setup to acquire overlapping images of the food from different, pre-calibrated positions to estimate the depths. The results were compared with the Kinect device’s depth measures to show the efficiency of the proposed method. We further investigated the optimum number of camera positions, their corresponding angles, and distances from the object to propose the best configuration for such a controlled system of image acquisition with regular mobile cameras. Overall, the proposed method presents a low-cost solution to the depth estimation problem and opens up possibilities for mobile-based apps for dietary assessment in various health-related problems.
15

Hu, Yuhang, Manhong Yao, Zhuobin Huang, Junzheng Peng, Zibang Zhang, and Jingang Zhong. "Full-Resolution Light-Field Camera via Fourier Dual Photography." Photonics 9, no. 8 (August 10, 2022): 559. http://dx.doi.org/10.3390/photonics9080559.

Abstract:
Conventional light-field cameras with a micro-lens array suffer from resolution trade-off and shallow depth of field. Here we develop a full-resolution light-field camera based on dual photography. We extend the principle of dual photography from real space to Fourier space for obtaining two-dimensional (2D) angular information of the light-field. It uses a spatial light modulator at the image plane as a virtual 2D detector to record the 2D spatial distribution of the image, and a real 2D detector at the Fourier plane of the image to record the angles of the light rays. The Fourier-spectrum signals recorded by each pixel of the real 2D detector can be used to reconstruct a perspective image through single-pixel imaging. Based on the perspective images reconstructed by different pixels, we experimentally demonstrated that the camera can digitally refocus on objects at different depths. The camera can achieve light-field imaging with full resolution and provide an extreme depth of field. The method provides a new idea for developing full-resolution light-field cameras.
16

Burnett, Bryan, and Steven Blaauw. "Macro Imaging with Digital Cameras." Microscopy Today 11, no. 4 (August 2003): 32–35. http://dx.doi.org/10.1017/s1551929500053050.

Abstract:
Advances in charge-coupled device (CCD) design, low-cost processing power, cheap memory, and dropping prices of digital cameras over the last few years have made the CCD digital camera an attractive alternative to the film camera for many imaging applications. This is especially true in macro imaging, where it appears likely that digital cameras will replace film cameras (curiously, Long (2001) says otherwise). As will be described here, a digital camera equipped with a quality macro-zoom lens generates images with a depth of field (e.g., Fig. 1) that greatly surpasses that of images produced by a film camera with a comparable lens system.
17

Hou, A. Lin, Ying Geng, Xue Cui, Wen Ju Yuan, and Feng Guang Shi. "Measurement of the Depth Distance Based on the Binocular Stereo Vision." Advanced Materials Research 468-471 (February 2012): 1895–98. http://dx.doi.org/10.4028/www.scientific.net/amr.468-471.1895.

Abstract:
A measurement system for depth distance based on binocular stereo vision is proposed. The camera imaging model is established and the camera calibration process is described. The interior and exterior parameters of the two cameras are calculated with HALCON. The depth distance is then obtained from the parallax measured between the image pairs. Simulation experiments were carried out, and the results demonstrate that the measuring method is feasible.
18

Krishnan, Aravindhan K., and Srikanth Saripalli. "Cross-Calibration of RGB and Thermal Cameras with a LIDAR for RGB-Depth-Thermal Mapping." Unmanned Systems 05, no. 02 (April 2017): 59–78. http://dx.doi.org/10.1142/s2301385017500054.

Abstract:
We present a method for calibrating the extrinsic parameters between a RGB camera, a thermal camera, and a LIDAR. The calibration procedure we use is common to both the RGB and thermal cameras. The extrinsic calibration procedure assumes that the cameras are geometrically calibrated. To aid the geometric calibration of the thermal camera, we use a calibration target made of black-and-white melamine that looks like a checkerboard pattern in the thermal and RGB images. For the extrinsic calibration, we place a circular calibration target in the common field of view of the cameras and the LIDAR and compute the extrinsic parameters by minimizing an objective function that aligns the edges of the circular target in the LIDAR to its corresponding edges in the RGB and thermal images. We illustrate the convexity of the objective function and discuss the convergence of the algorithm. We then identify the various sources of coloring errors (after cross-calibration) as (a) noise in the LIDAR points, (b) error in the intrinsic parameters of the camera, (c) error in the translation parameters between the LIDAR and the camera and (d) error in the rotation parameters between the LIDAR and the camera. We analyze the contribution of these errors with respect to the coloring of a 3D point. We illustrate that these errors are related to the depth of the 3D point considered — with errors (a), (b), and (c) being inversely proportional to the depth, and error (d) being directly proportional to the depth.
19

Furmonas, Justas, John Liobe, and Vaidotas Barzdenas. "Analytical Review of Event-Based Camera Depth Estimation Methods and Systems." Sensors 22, no. 3 (February 5, 2022): 1201. http://dx.doi.org/10.3390/s22031201.

Abstract:
Event-based cameras have increasingly become more commonplace in the commercial space as the performance of these cameras has also continued to increase to the degree where they can exponentially outperform their frame-based counterparts in many applications. However, instantiations of event-based cameras for depth estimation are sparse. After a short introduction detailing the salient differences and features of an event-based camera compared to that of a traditional, frame-based one, this work summarizes the published event-based methods and systems known to date. An analytical review of these methods and systems is performed, justifying the conclusions drawn. This work is concluded with insights and recommendations for further development in the field of event-based camera depth estimation.
20

Wei, Y., Z. Dong, and C. Wu. "Depth measurement using single camera with fixed camera parameters." IET Computer Vision 6, no. 1 (2012): 29. http://dx.doi.org/10.1049/iet-cvi.2010.0017.

21

Fang, Bin, Wu Sheng Chou, Xiao Qi Guo, and Xin Ma. "The Design of an Miniature Underwater Robot for Hazardous Environment." Applied Mechanics and Materials 347-350 (August 2013): 711–14. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.711.

Abstract:
A miniature underwater robotic system for hazardous environments has been developed. The system consists of an underwater robot, a robot control station, and a camera control station. The underwater robot carries two cameras for inspection: a radiation-resistant camera with a two-degree-of-freedom PTZ mount at the front of the robot, and a fixed camera at the back. A miniature manipulator is mounted under the fore-camera to grasp small parts such as bolts and nuts in the pools. The movement of the underwater robot is controlled by the master control station, while the camera control station controls the rotation and focus of the fore-camera. In addition, the underwater robot is equipped with sensors such as a MEMS inertial measurement unit, magnetometers, side-scan sonar, and water-depth gauges, which are integrated to determine the orientation and location of the robot. The navigation information is displayed in a virtual environment modeled on the real pools of the nuclear power plant. The underwater robotic system is easy to operate and will be applied to hazardous environments such as nuclear facilities in the future.
22

Park, Hyun Jun, and Kwang Baek Kim. "Depth image correction for intel realsense depth camera." Indonesian Journal of Electrical Engineering and Computer Science 19, no. 2 (August 1, 2020): 1021. http://dx.doi.org/10.11591/ijeecs.v19.i2.pp1021-1027.

Abstract:
The Intel RealSense depth camera provides a depth image using an infrared projector and an infrared camera. Using infrared radiation makes it possible to measure depth with high accuracy, but the shadow of the infrared radiation leaves regions with unmeasured depth. The Intel RealSense SDK provides a post-processing algorithm to correct this, but the algorithm is not sufficient and needs improvement. Therefore, we propose a method to correct the depth image using image processing techniques. The proposed method corrects the depth using adjacent depth information. Experimental results showed that the proposed method corrects the depth image more accurately than the Intel RealSense SDK.
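The idea of filling unmeasured (shadow) pixels from adjacent valid depths can be sketched roughly as below; this is a generic neighbourhood-median scheme, not the authors' algorithm, and the zero-means-invalid convention and toy values are assumptions:

```python
import numpy as np

def fill_depth_holes(depth, max_passes=10):
    """Fill zero-valued (unmeasured) pixels with the median of their valid
    8-neighbours, repeating until no further hole can be filled.
    A crude stand-in for neighbourhood-based depth correction."""
    d = depth.astype(np.float32).copy()
    for _ in range(max_passes):
        holes = np.argwhere(d == 0)
        if holes.size == 0:
            break
        filled_any = False
        for y, x in holes:
            patch = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = patch[patch > 0]
            if valid.size:
                d[y, x] = np.median(valid)
                filled_any = True
        if not filled_any:
            break
    return d

depth = np.array([[800, 810,   0],
                  [805,   0,   0],
                  [790, 795, 820]], dtype=np.uint16)   # toy depth map, 0 = no data
print(fill_depth_holes(depth))
```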
23

Lin, Yueh-Ling, Mao-Jiun J. Wang, Yao-Yang Tsai, and Jay Huang. "Feature Extraction from Depth Camera." Advanced Science Letters 9, no. 1 (April 30, 2012): 429–34. http://dx.doi.org/10.1166/asl.2012.2560.

24

Zeller, N., F. Quint, and U. Stilla. "NARROW FIELD-OF-VIEW VISUAL ODOMETRY BASED ON A FOCUSED PLENOPTIC CAMERA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (March 11, 2015): 285–92. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-285-2015.

Abstract:
In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm and the one received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information highly improves the tracking capabilities of the algorithm. Thus, visual odometry even for narrow field of view (FOV) cameras is possible. We show that not only tracking profits from the additional light-field information. By accumulating the depth information over multiple tracked images, also the depth accuracy of the focused plenoptic camera can be highly improved. This novel approach improves the depth error by one order of magnitude compared to the one received from a single light-field image.
25

Sarizeybek, Ali Tezcan, and Ali Hakan Isik. "Monocular Depth Estimation and Detection of Near Objects." Uluslararası Teknolojik Bilimler Dergisi 14, no. 3 (December 31, 2022): 124–31. http://dx.doi.org/10.55974/utbd.1177526.

Abstract:
The image obtained from a camera is 2D, so the distance to an object in the image is unknown. In order to detect only objects within a certain distance in a camera system, the 2D image needs to be converted into 3D. Depth estimation is used to estimate distances to objects, i.e., to perceive the 2D image as 3D. Although different methods exist for this, the method applied in this experiment estimates depth with a single camera. After the depth map is obtained, the image is filtered so that distant regions are masked out and only nearby objects remain; the object detection model is then run on this new image. The desired result of this experiment is that, for low-budget projects, a robot can detect obstacles in front of it with only one camera instead of using dual-camera or LIDAR methods. As a result, 8 FPS was achieved by running the two models on the embedded device, and a loss value of 0.342 was obtained in the inference test performed on the new image, where only close objects were retained after depth estimation.
26

Kim, Woosung, Tuan Luong, Yoonwoo Ha, Myeongyun Doh, Juan Fernando Medrano Yax, and Hyungpil Moon. "High-Fidelity Drone Simulation with Depth Camera Noise and Improved Air Drag Force Models." Applied Sciences 13, no. 19 (September 24, 2023): 10631. http://dx.doi.org/10.3390/app131910631.

Abstract:
Drone simulations offer a safe environment for collecting data and testing algorithms. However, the depth camera sensor in the simulation provides exact depth values without error, which can result in variations in algorithm behavior, especially in the case of SLAM, when transitioning to real-world environments. The aerodynamic model in the simulation also differs from reality, leading to larger errors in drag force calculations at high speeds. This disparity between simulation and real-world conditions poses challenges when attempting to transfer high-speed drone algorithms developed in the simulated environment to actual operational settings. In this paper, we propose a more realistic simulation by implementing a novel depth camera noise model and an improved aerodynamic drag force model. Through experimental validation, we demonstrate the suitability of our models for simulating real-depth cameras and air drag forces. Our depth camera noise model can replicate the values of a real depth camera sensor with a coefficient of determination (R2) value of 0.62, and our air drag force model improves accuracy by 51% compared to the Airsim simulation air drag force model in outdoor flying experiments at 10 m/s.
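A common way to make simulated depth less ideal, in the spirit of the noise model described here (the paper's actual model is fitted to real sensor data; the quadratic growth and the coefficients below are assumptions for illustration only):

```python
import numpy as np

def add_depth_noise(depth_m, sigma0=0.002, k=0.003, rng=None):
    """Corrupt an ideal simulated depth image [m] with zero-mean Gaussian
    axial noise whose standard deviation grows with distance:
    sigma(z) = sigma0 + k * z**2.  Coefficients are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = sigma0 + k * depth_m ** 2
    noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma
    return np.clip(noisy, 0.0, None)

ideal = np.full((4, 4), 5.0)   # flat wall 5 m away in the simulation
print(add_depth_noise(ideal, rng=np.random.default_rng(0)))
```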
27

Zeller, N., C. A. Noury, F. Quint, C. Teulière, U. Stilla, and M. Dhome. "METRIC CALIBRATION OF A FOCUSED PLENOPTIC CAMERA BASED ON A 3D CALIBRATION TARGET." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 449–56. http://dx.doi.org/10.5194/isprsannals-iii-3-449-2016.

Abstract:
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
28

Zeller, N., C. A. Noury, F. Quint, C. Teulière, U. Stilla, and M. Dhome. "METRIC CALIBRATION OF A FOCUSED PLENOPTIC CAMERA BASED ON A 3D CALIBRATION TARGET." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 449–56. http://dx.doi.org/10.5194/isprs-annals-iii-3-449-2016.

Abstract:
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
29

Sarker, M. M., T. A. Ali, A. Abdelfatah, S. Yehia, and A. Elaksher. "A COST-EFFECTIVE METHOD FOR CRACK DETECTION AND MEASUREMENT ON CONCRETE SURFACE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W8 (November 14, 2017): 237–41. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w8-237-2017.

Abstract:
Crack detection and measurement on the surface of concrete structures is currently carried out manually or through Non-Destructive Testing (NDT) such as imaging or scanning. The recent developments in depth (stereo) cameras have presented an opportunity for cost-effective, reliable crack detection and measurement. This study aimed at evaluating the feasibility of the new inexpensive depth camera (ZED) for crack detection and measurement. This depth camera, with its lightweight and portable nature, produces a 3D data file of the imaged surface. The ZED camera was utilized to image a concrete surface and the 3D file was processed to detect and analyse cracks. This article describes the outcome of the experiment carried out with the ZED camera as well as the processing tools used for crack detection and analysis. Crack properties of interest were length, orientation, and width. The use of the ZED camera allowed for distinction between surface and concrete cracks. The ZED's high-resolution capability and point-cloud capture technology helped in generating dense 3D data in low-lighting conditions. The results showed the ability of the ZED camera to capture the differences in crack depth between surface (render) cracks and cracks that form in the concrete itself.
30

Hoegner, L., A. Hanel, M. Weinmann, B. Jutzi, S. Hinz, and U. Stilla. "Towards people detection from fused time-of-flight and thermal infrared images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 121–26. http://dx.doi.org/10.5194/isprsarchives-xl-3-121-2014.

Abstract:
Obtaining accurate 3d descriptions in the thermal infrared (TIR) is quite a challenging task due to the low geometric resolutions of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3d data from another sensor can overcome most of the limitations in the 3d geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras or Time-of-Flight (TOF) cameras is suitable. As a TOF camera is an active sensor in the near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close range applications is independent of external illumination or textures in the scene. This article is focused on the fusion of data acquired both with a time-of-flight (TOF) camera and a thermal infrared (TIR) camera. As the radiometric behaviour of many objects differs between the near infrared used by the TOF camera and the thermal infrared spectrum, a direct co-registration with feature points in both intensity images leads to a high number of outliers. A fully automatic workflow for the geometric calibration of both cameras and the relative orientation of the camera system with one calibration pattern usable for both spectral bands is presented. Based on the relative orientation, a fusion of the TOF depth image and the TIR image is used for scene segmentation and people detection. An adaptive histogram-based depth-level segmentation of the 3d point cloud is combined with a thermal intensity-based segmentation. The feasibility of the proposed method is demonstrated in an experimental setup with different geometric and radiometric influences that show the benefit of the combination of TOF intensity and depth images and thermal infrared images.
31

Kim, Hyung-Su, and Young-Hwan Han. "VFH+ based 3D Spatial Map using Depth Camera." Journal of Korean Institute of Information Technology 19, no. 7 (July 31, 2021): 35–42. http://dx.doi.org/10.14801/jkiit.2021.19.7.35.

32

Zhao, Ruiyi, Yangshi Ge, Ye Duan, and Quanhong Jiang. "Large-field Gesture Tracking and Recognition for Augmented Reality Interaction." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012016. http://dx.doi.org/10.1088/1742-6596/2560/1/012016.

Abstract:
In recent years, with the continuous development of computer vision and artificial intelligence technology, gesture recognition has become widely used in many fields, such as virtual reality and augmented reality. However, the traditional binocular camera architecture is limited by its restricted field-of-view angle and depth perception range. The fisheye camera is gradually being applied in the gesture recognition field because of its larger field-of-view angle. Fisheye cameras offer a wider field of vision than conventional binocular cameras, allowing for a greater range of gesture recognition, which gives them a distinct advantage in situations that require a wide field of view. However, because the imaging model of a fisheye camera differs from that of a traditional camera, its images exhibit a certain degree of distortion, which makes the computation for gesture recognition more complicated. Our goal is to design a distortion correction strategy suitable for fisheye cameras in order to extend the range of gesture recognition and achieve large-field-of-view gesture recognition. Combined with binocular technology, the acquired hand depth information can be used to enrich the means of interaction. Taking advantage of the large viewing angle of the fisheye camera expands the range of gesture recognition and makes it more extensive and accurate. This will help improve the real-time performance and precision of gesture recognition, which has important implications for artificial intelligence, virtual reality, and augmented reality.
33

Yoon, Soocheol, Ya-Shian Li-Baboud, Ann Virts, Roger Bostelman, and Mili Shah. "Feasibility of using depth cameras for evaluating human - exoskeleton interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1892–96. http://dx.doi.org/10.1177/1071181322661190.

Abstract:
With the increased use of exoskeletons in a variety of fields such as industry, military, and health care, there is a need for measurement standards to understand the effects of exoskeletons on human motion. Optical tracking systems (OTS) provide high accuracy human motion tracking, but are expensive, require markers, and constrain the tests to a specified area where the cameras can provide sufficient coverage. This study describes the feasibility of using lower cost, portable, markerless depth camera systems for measuring human and exoskeleton 3-dimensional (3D) joint positions and angles. A human performing a variety of industrial tasks while wearing three different exoskeletons was tracked by both an OTS with modified skeletal models and a depth camera body tracking system. A comparison of the acquired data was then used to facilitate discussions regarding the potential use of depth cameras for exoskeleton evaluation.
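Once a markerless body-tracking system returns 3D joint positions, a joint angle is typically computed from three adjacent joints; the sketch below shows that computation with made-up coordinates (e.g., shoulder-elbow-wrist for an elbow angle), independent of any particular camera, SDK, or exoskeleton used in the study:

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (degrees) at p_joint formed by the segments to p_prox and
    p_dist, all given as 3D positions from a body-tracking system."""
    u = np.asarray(p_prox) - np.asarray(p_joint)
    v = np.asarray(p_dist) - np.asarray(p_joint)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical shoulder, elbow and wrist positions in metres (camera frame).
shoulder, elbow, wrist = [0.0, 1.4, 2.0], [0.05, 1.1, 2.0], [0.25, 0.95, 1.9]
print(f"elbow angle ~ {joint_angle(shoulder, elbow, wrist):.1f} deg")
```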
34

Park, Jang-Sik. "Smoke Detection Based on RGB-Depth Camera in Interior." Journal of the Korea institute of electronic communication sciences 9, no. 2 (February 28, 2014): 155–60. http://dx.doi.org/10.13067/jkiecs.2014.9.2.155.

35

Chen, Mingce, Wenda He, Dong Wei, Chai Hu, Jiashuo Shi, Xinyu Zhang, Haiwei Wang, and Changsheng Xie. "Depth-of-Field-Extended Plenoptic Camera Based on Tunable Multi-Focus Liquid-Crystal Microlens Array." Sensors 20, no. 15 (July 25, 2020): 4142. http://dx.doi.org/10.3390/s20154142.

Abstract:
Plenoptic cameras have received a wide range of research interest because it can record the 4D plenoptic function or radiance including the radiation power and ray direction. One of its important applications is digital refocusing, which can obtain 2D images focused at different depths. To achieve digital refocusing in a wide range, a large depth of field (DOF) is needed, but there are fundamental optical limitations to this. In this paper, we proposed a plenoptic camera with an extended DOF by integrating a main lens, a tunable multi-focus liquid-crystal microlens array (TMF-LCMLA), and a complementary metal oxide semiconductor (CMOS) sensor together. The TMF-LCMLA was fabricated by traditional photolithography and standard microelectronic techniques, and its optical characteristics including interference patterns, focal lengths, and point spread functions (PSFs) were experimentally analyzed. Experiments demonstrated that the proposed plenoptic camera has a wider range of digital refocusing compared to the plenoptic camera based on a conventional liquid-crystal microlens array (LCMLA) with only one corresponding focal length at a certain voltage, which is equivalent to the extension of DOF. In addition, it also has a 2D/3D switchable function, which is not available with conventional plenoptic cameras.
36

Xu, De, and Qingbin Wang. "A new vision measurement method based on active object gazing." International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771598. http://dx.doi.org/10.1177/1729881417715984.

Abstract:
A new vision measurement system is developed with two cameras. One is fixed in pose to serve as a monitor camera. It finds and tracks objects in image space. The other is actively rotated to track the object in Cartesian space, working as an active object-gazing camera. The intrinsic parameters of the monitor camera are calibrated. The view angle corresponding to the object is calculated from the object’s image coordinates and the camera’s intrinsic parameters. The rotation angle of the object-gazing camera is measured with an encoder. The object’s depth is computed with the rotation angle and the view angle. Then the object’s three-dimensional position is obtained with its depth and normalized imaging coordinates. The error analysis is provided to assess the measurement accuracy. The experimental results verify the effectiveness of the proposed vision system and measurement method.
37

Tomida, Motomasa, and Kiyoshi Hoshino. "Visual-Servoing Control of Robot Hand with Estimation of Full Articulation of Human Hand." Key Engineering Materials 625 (August 2014): 728–35. http://dx.doi.org/10.4028/www.scientific.net/kem.625.728.

Abstract:
Depth sensors and depth cameras have become available at a reasonable cost in recent years. Due to the excessive dispersion of the depth values output by such cameras, however, the measurements cannot be directly employed for complicated hand pose estimation. The authors therefore propose a visual-servoing-controlled robotic hand using high-speed RGB cameras. The two cameras each have their own database in the system. Each data set holds proportion information for each hand image, image features for matching, and joint angle data to be output as estimation results. Once sequential hand images are recorded with the two high-speed RGB cameras, the system first selects the database whose recorded images contain the larger hand region. Second, a coarse screening is carried out according to the proportion information of the hand image, which roughly corresponds to wrist rotation and thumb or finger extension. Third, a detailed similarity search is performed among the selected candidates. The estimated results are transmitted to a robot hand so that the operator's motions are reconstructed in the robot without time delay.
38

Rodriguez, Julian Severiano. "A comparison of an RGB-D cameras performance and a stereo camera in relation to object recognition and spatial position determination." ELCVIA Electronic Letters on Computer Vision and Image Analysis 20, no. 1 (May 27, 2021): 16–27. http://dx.doi.org/10.5565/rev/elcvia.1238.

Abstract:
Results of using an RGB-D camera (Kinect sensor) and a stereo camera, separately, to determine the real 3D position of characteristic points of a predetermined object in a scene are presented. The KAZE algorithm, which exploits a nonlinear scale space built through nonlinear diffusion filtering, was used for recognition. The 3D coordinates of the centroid of the predetermined object were calculated using the camera calibration information and the depth provided by the Kinect sensor and the stereo camera. Experimental results show that it is possible to obtain the required coordinates with both cameras in order to locate a robot, although the distance at which the sensor is placed must be balanced: no less than 0.8 m from the object for the Kinect, due to its operating range, and no less than 0.5 m for the stereo camera, but not much more than 1 m away in order to keep a suitable object recognition rate. In addition, the Kinect sensor measures distance more precisely than the stereo camera.
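Computing the 3D coordinates of a detected centroid from calibration information and a depth value comes down to back-projecting the pixel through the pinhole model; the intrinsics, pixel, and depth values below are illustrative assumptions, not those of the Kinect or stereo camera used in the paper:

```python
import numpy as np

def backproject(u, v, z, K):
    """Back-project pixel (u, v) with depth z [m] into 3D camera
    coordinates using the pinhole intrinsic matrix K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

K = np.array([[525.0,   0.0, 319.5],   # assumed intrinsics (pixels)
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

centroid_px = (350, 210)   # detected object centroid in the image
depth_m = 1.25             # depth reported at that pixel
print(backproject(*centroid_px, depth_m, K))
```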
39

Tran, Van Luan, and Huei-Yung Lin. "A Structured Light RGB-D Camera System for Accurate Depth Measurement." International Journal of Optics 2018 (November 1, 2018): 1–7. http://dx.doi.org/10.1155/2018/8659847.

Abstract:
The ability to reliably measure the depth of the object surface is very important in a range of high-value industries. With the development of 3D vision techniques, RGB-D cameras have been widely used to perform the 6D pose estimation of target objects for a robotic manipulator. Many applications require accurate shape measurements of the objects for 3D template matching. In this work, we develop an RGB-D camera based on the structured light technique with gray-code coding. The intrinsic and extrinsic parameters of the camera system are determined by a calibration process. 3D reconstruction of the object surface is based on the ray triangulation principle. We construct an RGB-D sensing system with an industrial camera and a digital light projector. In the experiments, real-world objects are used to test the feasibility of the proposed technique. The evaluation carried out using planar objects has demonstrated the accuracy of our RGB-D depth measurement system.
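In gray-code structured light, each projector column is identified by decoding the bit pattern observed at a camera pixel; the standard gray-to-binary conversion used for that decoding is sketched below with an arbitrary 6-bit example (the bit count and observed pattern are assumptions, not details from the paper):

```python
def gray_to_index(bits):
    """Decode a gray-code bit sequence (MSB first), as recovered from a
    stack of thresholded camera images, into the projector column index."""
    value = bits[0]
    decoded = [value]
    for b in bits[1:]:
        value ^= b          # binary bit = previous binary bit XOR gray bit
        decoded.append(value)
    return int("".join(map(str, decoded)), 2)

# Pixel observed as bright/dark across six gray-code patterns (assumed).
observed = [1, 0, 1, 1, 0, 1]
print("projector column:", gray_to_index(observed))
```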
40

Lei, Fung Chan, Chin Yi He, and Rong Ching Lo. "Using Optical Flow under Bird’s-Eye View Transform to Estimate the Height of Objects around a Vehicle." Applied Mechanics and Materials 130-134 (October 2011): 1839–45. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.1839.

Abstract:
The paper proposes a new method that estimates the height of objects in real time with a single camera from a bird's-eye view. Generally, it is impossible to obtain 3-D information, such as the depth of objects, with a single-lens camera unless additional information, such as the height and tilt angle of the camera, is known in advance [1]. The disparity map of binocular cameras is usually employed to estimate depth, but it is not well suited for vehicles estimating the height of objects (analogous to estimating depth from a planar view) from a bird's-eye view, owing to the difficulties of installation and correspondence. Therefore, using optical flow to estimate object height with one camera is proposed. A dynamic bird's-eye-view image has two useful properties. First, the optical flow value is proportional to the height of the object. Second, there is no perspective effect within each height layer of the image plane. Several experimental results are included to show that the proposed method is feasible.
41

Neupane, Chiranjivi, Anand Koirala, Zhenglin Wang, and Kerry Brian Walsh. "Evaluation of Depth Cameras for Use in Fruit Localization and Sizing: Finding a Successor to Kinect v2." Agronomy 11, no. 9 (September 5, 2021): 1780. http://dx.doi.org/10.3390/agronomy11091780.

Abstract:
Eight depth cameras varying in operational principle (stereoscopy: ZED, ZED2, OAK-D; IR active stereoscopy: Real Sense D435; time of flight (ToF): Real Sense L515, Kinect v2, Blaze 101, Azure Kinect) were compared in context of use for in-orchard fruit localization and sizing. For this application, a specification on bias-corrected root mean square error of 20 mm for a camera-to-fruit distance of 2 m and operation under sunlit field conditions was set. The ToF cameras achieved the measurement specification, with a recommendation for use of Blaze 101 or Azure Kinect made in terms of operation in sunlight and in orchard conditions. For a camera-to-fruit distance of 1.5 m in sunlight, the Azure Kinect measurement achieved an RMSE of 6 mm, a bias of 17 mm, an SD of 2 mm and a fill rate of 100% for depth values of a central 50 × 50 pixels group. To enable inter-study comparisons, it is recommended that future assessments of depth cameras for this application should include estimation of a bias-corrected RMSE and estimation of bias on estimated camera-to-fruit distances at 50 cm intervals to 3 m, under both artificial light and sunlight, with characterization of image distortion and estimation of fill rate.
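The evaluation metrics used in this comparison (bias, bias-corrected RMSE, and fill rate over a pixel patch) can be computed as follows; the toy depth values and the zero-means-missing convention are assumptions, not measurements from the tested cameras:

```python
import numpy as np

def depth_patch_metrics(measured_mm, true_mm):
    """Bias, bias-corrected RMSE and fill rate for a patch of depth values.
    Zeros are treated as missing measurements (assumed convention)."""
    valid = measured_mm > 0
    fill_rate = valid.mean()
    err = measured_mm[valid].astype(float) - true_mm
    bias = err.mean()
    rmse_corrected = np.sqrt(np.mean((err - bias) ** 2))
    return bias, rmse_corrected, fill_rate

patch = np.array([[1512, 1498, 1503],
                  [   0, 1509, 1495],
                  [1501, 1506, 1500]])   # camera-to-fruit ~1.5 m, in mm
bias, rmse_bc, fill = depth_patch_metrics(patch, true_mm=1500.0)
print(f"bias={bias:.1f} mm, bias-corrected RMSE={rmse_bc:.1f} mm, fill={fill:.0%}")
```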
42

Liu, Xin, Egor Bondarev, Sander R. Klomp, Joury Zimmerman, and Peter H. N. de With. "Semantic 3D Indoor Reconstruction with Stereo Camera Imaging." Electronic Imaging 2021, no. 18 (January 18, 2021): 105–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-105.

Abstract:
On-the-fly reconstruction of 3D indoor environments has recently become an important research field to provide situational awareness for first responders, like police and defence officers. The protocols do not allow deployment of active sensors (LiDAR, ToF, IR cameras) to prevent the danger of being exposed. Therefore, passive sensors, such as stereo cameras or moving mono sensors, are the only viable options for 3D reconstruction. At present, even the best portable stereo cameras provide an inaccurate estimation of depth images, caused by the small camera baseline. Reconstructing a complete scene from inaccurate depth images becomes then a challenging task. In this paper, we present a real-time ROS-based system for first responders that performs semantic 3D indoor reconstruction based purely on stereo camera imaging. The major components in the ROS system are depth estimation, semantic segmentation, SLAM and 3D point-cloud filtering. First, we improve the semantic segmentation by training the DeepLab V3+ model [9] with a filtered combination of several publicly available semantic segmentation datasets. Second, we propose and experiment with several noise filtering techniques on both depth images and generated point-clouds. Finally, we embed semantic information into the mapping procedure to achieve an accurate 3D floor plan. The obtained semantic reconstruction provides important clues on the inside structure of an unseen building which can be used for navigation.
43

Hanel, A., L. Hoegner, and U. Stilla. "TOWARDS THE INFLUENCE OF A CAR WINDSHIELD ON DEPTH CALCULATION WITH A STEREO CAMERA SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5 (June 15, 2016): 461–68. http://dx.doi.org/10.5194/isprs-archives-xli-b5-461-2016.

Full text
Abstract:
Stereo camera systems in cars are often used to estimate the distance of other road users from the car, information that is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths derived from the relative orientation in the two cases are compared, and distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, and the resulting effects on the distance calculation amount to up to half a meter.
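The sensitivity of distance estimates to the base length can be illustrated with the standard stereo model Z = f·B/d; the numbers and names in the sketch below are placeholders, not values reported in the paper.

```python
def stereo_distance_error(f_px, baseline_true_m, baseline_calibrated_m, true_dist_m):
    """Distance error caused by using a (windshield-affected) calibrated baseline.

    The disparity is generated by the true geometry, but depth is recovered
    with the calibrated baseline, so the estimate scales by B_cal / B_true.
    """
    disparity_px = f_px * baseline_true_m / true_dist_m
    est_dist_m = f_px * baseline_calibrated_m / disparity_px
    return est_dist_m - true_dist_m

# Hypothetical example: a 1 mm baseline difference on a 30 cm rig at 50 m
# stereo_distance_error(1200, 0.300, 0.301, 50.0)  ->  ~0.17 m
```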
APA, Harvard, Vancouver, ISO, and other styles
45

Kaichi, Tomoya, Tsubasa Maruyama, Mitsunori Tada, and Hideo Saito. "Resolving Position Ambiguity of IMU-Based Human Pose with a Single RGB Camera." Sensors 20, no. 19 (September 23, 2020): 5453. http://dx.doi.org/10.3390/s20195453.

Full text
Abstract:
Human motion capture (MoCap) plays a key role in healthcare and human–robot collaboration. Some researchers have combined orientation measurements from inertial measurement units (IMUs) with positional inference from cameras to reconstruct 3D human motion. These works utilize multiple cameras or depth sensors to localize the human in three dimensions. Such multi-camera setups are not always available in daily life, whereas a single camera embedded in a smart IP device has recently become common. Therefore, we present a 3D pose estimation approach based on IMUs and a single camera. In order to resolve the depth ambiguity of the single-camera configuration and localize the global position of the subject, we present a constraint that optimizes the foot–ground contact points. The timing and 3D positions of ground contact are calculated from the acceleration of the foot-mounted IMUs and from the geometric transformation of the foot position detected in the image, respectively. Since the results of pose estimation are greatly affected by detection failures, we design image-based constraints to handle outliers in the positional estimates. We evaluated the performance of our approach on a public 3D human pose dataset. The experiments demonstrated that the proposed constraints improve the accuracy of pose estimation in both single- and multiple-camera settings.
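For illustration only, the foot–ground contact timing mentioned above can be approximated by a zero-velocity-style test on the foot IMU; the window length and thresholds below are guesses, not the values used in the paper.

```python
import numpy as np

def detect_foot_contacts(acc_xyz, fs_hz, win_s=0.1, tol_mps2=0.5, g=9.81):
    """Rough foot-ground contact detection from a foot-mounted accelerometer.

    acc_xyz: Nx3 accelerometer samples in m/s^2. A sample is marked as
    "in contact" when the smoothed acceleration magnitude stays close to
    gravity, i.e. the foot is nearly stationary.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)
    win = max(1, int(win_s * fs_hz))
    kernel = np.ones(win) / win
    mean_mag = np.convolve(mag, kernel, mode="same")          # smoothed magnitude
    dev = np.convolve(np.abs(mag - g), kernel, mode="same")   # smoothed deviation from gravity
    return (np.abs(mean_mag - g) < tol_mps2) & (dev < tol_mps2)
```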
APA, Harvard, Vancouver, ISO, and other styles
46

Han, Y. H. "Upper Extremity Rehabilitation using Depth Camera." Journal of Rehabilitation Welfare Engineering & Assistive Technology 14, no. 1 (February 29, 2020): 36–41. http://dx.doi.org/10.21288/resko.2020.14.1.36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ge, Kailin, Han Hu, Jianjiang Feng, and Jie Zhou. "Depth Estimation Using a Sliding Camera." IEEE Transactions on Image Processing 25, no. 2 (February 2016): 726–39. http://dx.doi.org/10.1109/tip.2015.2507984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Xiaowu, Bin Zhou, Feixiang Lu, Lin Wang, Lang Bi, and Ping Tan. "Garment modeling with a depth camera." ACM Transactions on Graphics 34, no. 6 (November 4, 2015): 1–12. http://dx.doi.org/10.1145/2816795.2818059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Holst, Gerhard. "What Bit Depth for my Camera?" Optik & Photonik 10, no. 1 (February 2015): 46–48. http://dx.doi.org/10.1002/opph.201500002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

JIANG, WEI, SHIGEKI SUGIMOTO, and MASATOSHI OKUTOMI. "PANORAMIC 3D RECONSTRUCTION USING STEREO MULTI-PERSPECTIVE PANORAMA." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 06 (September 2010): 867–96. http://dx.doi.org/10.1142/s0218001410008226.

Full text
Abstract:
In this paper, we present a novel approach to imaging a panoramic (360°) environment and computing its dense depth map. Our approach adopts a multi-baseline stereo strategy using a set of multi-perspective panoramas in which large baseline lengths are available. We design two image acquisition rigs for capturing such multi-perspective panoramas. The first is composed of two parallel stereo cameras. By rotating the rig about a vertical axis, we generate four multi-perspective panoramas by resampling the regular perspective images captured by the stereo cameras. A depth map is then estimated from the four multi-perspective panoramas and an original perspective image using a multi-baseline matching technique with different types of epipolar constraints. The second rig is composed of a single camera and two mirrors. By rotating the rig, we acquire a spatio-temporal volume made up of the sequential images captured by the camera. We then estimate a depth map by extracting trajectories from the spatio-temporal volume using a multi-baseline stereo technique that takes occlusions into account. Both rotating rigs can be considered as a single rotating camera with a very large field of view (FOV), which offers a large baseline length for depth estimation. In addition, compared with a previous approach using two multi-perspective panoramas from a single rotating camera, our approach can reduce matching errors due to image noise, repeated patterns, and occlusions through multi-baseline stereo techniques. Experimental results using both synthetic and real images show that our approach produces high-quality panoramic 3D reconstruction.
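The multi-baseline matching idea can be caricatured in a few lines for rectified images: shift each auxiliary image by the disparity its baseline predicts at a candidate inverse depth, sum the patch-wise SSD costs over all baselines, and keep the best candidate. The sketch below is a toy version of that aggregation, not the panorama-specific method of the paper; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multibaseline_inverse_depth(ref, others, baselines, f_px, inv_depths, patch=5):
    """Toy SSSD-style multi-baseline matching on horizontally rectified images."""
    h, w = ref.shape
    best_cost = np.full((h, w), np.inf)
    best_inv = np.zeros((h, w))
    for inv_d in inv_depths:                              # candidate inverse depths
        cost = np.zeros((h, w))
        for img, b in zip(others, baselines):
            disp = int(round(f_px * b * inv_d))           # disparity grows with baseline
            shifted = np.roll(img, disp, axis=1)          # crude shift along the epipolar line
            cost += uniform_filter((ref - shifted) ** 2, size=patch)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_inv[better] = inv_d
    return best_inv
```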
APA, Harvard, Vancouver, ISO, and other styles