Academic literature on the topic 'Calibration of robot base with depth sensor'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calibration of robot base with depth sensor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Calibration of robot base with depth sensor"

1. Zhou, Liling, Yingzi Wang, Yunfei Liu, Haifeng Zhang, Shuaikang Zheng, Xudong Zou, and Zhitian Li. "A Tightly-Coupled Positioning System of Online Calibrated RGB-D Camera and Wheel Odometry Based on SE(2) Plane Constraints." Electronics 10, no. 8 (April 19, 2021): 970. http://dx.doi.org/10.3390/electronics10080970.

Abstract:
The emergence of the Automated Guided Vehicle (AGV) has greatly increased the efficiency of the transportation industry, creating an urgent requirement for accurate and easy-to-use positioning of 2D planar-motion robots. Multi-sensor fusion positioning has gradually become an important technical route to improving overall efficiency in AGV positioning. As a sensor that directly acquires depth, the RGB-D camera has received extensive attention for indoor positioning in recent years, while wheel odometry is the sensor that comes with most two-dimensional planar-motion robots, and its parameters do not change over time. Both the RGB-D camera and wheel odometry are commonly used sensors for indoor robot positioning, but existing research on their fusion is limited to classic filtering algorithms; few fusion solutions based on optimization are available at present. To ensure practicability and greatly improve the accuracy of RGB-D and odometry fusion positioning, this paper proposes a tightly-coupled positioning scheme of an online-calibrated RGB-D camera and wheel odometry based on SE(2) plane constraints. Experiments show that the angular accuracy of the extrinsic parameters in the calibration part is better than 0.5 degrees, and the displacement of the extrinsic parameters reaches the millimeter level. In field tests, the proposed positioning system reached centimeter-level accuracy on a dataset without pre-calibration, outperforming ORB-SLAM2 relying solely on RGB-D cameras. The experimental results verify the framework's performance in positioning accuracy and ease of use and suggest it is a promising technical solution in the field of two-dimensional AGV positioning.
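As a rough, self-contained illustration of the SE(2) machinery this paper builds on (not the authors' code), the sketch below composes planar poses and forms the odometry-versus-camera residual that a tightly-coupled optimizer would drive to zero; the function names are invented for this example:

```python
import math

def se2_compose(a, b):
    """Compose two SE(2) poses, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by,
            ay + s * bx + c * by,
            math.atan2(math.sin(ath + bth), math.cos(ath + bth)))

def se2_inverse(p):
    """Inverse pose: se2_compose(p, se2_inverse(p)) is the identity."""
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

def se2_residual(odom_pose, cam_pose):
    """Residual between the pose predicted by wheel odometry and the pose
    observed by the camera; if the extrinsic calibration and both estimates
    were perfect, this would be (0, 0, 0)."""
    return se2_compose(se2_inverse(odom_pose), cam_pose)
```

In an optimization-based fusion scheme, residuals of this shape (one per odometry/camera pose pair) would be stacked into a nonlinear least-squares problem.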
2. Li, Chunguang, Chongben Tao, and Guodong Liu. "3D Visual SLAM Based on Multiple Iterative Closest Point." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/943510.

Abstract:
With the development of novel RGB-D visual sensors, data association has become a basic problem in 3D Visual Simultaneous Localization and Mapping (VSLAM). To solve this problem, a VSLAM algorithm based on Multiple Iterative Closest Point (MICP) is presented. By using both RGB and depth information obtained from an RGB-D camera, 3D models of an indoor environment can be reconstructed, which provide extensive knowledge for mobile robots to accomplish tasks such as VSLAM and human-robot interaction. Due to the limited view of the RGB-D camera, additional information about the camera pose is needed. In this paper, the motion of the RGB-D camera is estimated by a motion capture system after a calibration process. Based on the estimated pose, the MICP algorithm is used to improve the alignment. A Kinect mobile robot running Robot Operating System, together with the motion capture system, was used for experiments. Results show that the proposed VSLAM algorithm not only achieved good accuracy and reliability but also generated the 3D map in real time.
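The core alignment step inside any ICP variant, MICP included, is the closed-form rigid-body fit between matched point sets. A minimal sketch using the SVD (Kabsch) solution, with correspondences assumed already known, might look like:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """One ICP alignment step: least-squares rotation R and translation t
    mapping the Nx3 array src onto dst (correspondences assumed known),
    via the SVD/Kabsch method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A full ICP loop would alternate this step with nearest-neighbor correspondence search until the alignment error stops improving.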
3. Chi, Chen Tung, Shih Chien Yang, and Yin Tien Wang. "Calibration of RGB-D Sensors for Robot SLAM." Applied Mechanics and Materials 479-480 (December 2013): 677–81. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.677.

Abstract:
This paper presents a calibration procedure for a Kinect RGB-D sensor and its application to robot simultaneous localization and mapping (SLAM). The calibration procedure consists of two stages: in the first stage, the RGB image is aligned with the depth image using bilinear interpolation; the distorted RGB image is then corrected in the second stage. The calibrated RGB-D sensor is used as the sensing device for robot navigation in an unknown environment. In the SLAM tasks, speeded-up robust features (SURF) are detected in the RGB image and used as landmarks in the environment map, while the depth image provides the stereo information of each landmark. Meanwhile, the robot estimates its own state and the landmark locations by means of the Extended Kalman Filter (EKF). EKF SLAM experiments were carried out, and the results showed that Kinect sensors can provide a mobile robot with reliable measurement information when navigating in an unknown environment.
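The first calibration stage, resampling one image onto the other's pixel grid, rests on bilinear interpolation. A minimal, library-free sketch of the sampling step (illustrative only, not the authors' implementation):

```python
def bilinear_sample(img, x, y):
    """Sample a 2D array (list of rows) at fractional coordinates (x, y)
    by bilinear interpolation, as used when resampling the depth image
    onto the RGB grid."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)   # clamp at the right/bottom edge
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Note that naive interpolation across depth discontinuities can blend foreground and background; practical pipelines usually mask invalid depth first.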
4. Gong, Chunhe, Jingxia Yuan, and Jun Ni. "A Self-Calibration Method for Robotic Measurement System." Journal of Manufacturing Science and Engineering 122, no. 1 (June 1, 1999): 174–81. http://dx.doi.org/10.1115/1.538916.

Abstract:
Robot calibration plays an increasingly important role in manufacturing. For robot calibration on the manufacturing floor, it is desirable that the calibration technique be easy and convenient to implement. This paper presents a new self-calibration method to calibrate and compensate for robot system kinematic errors. Compared with traditional calibration methods, this method has several unique features. First, it is not necessary to apply an external measurement system to measure the robot end-effector position for kinematic identification, since the robot measurement system has a sensor as its integral part. Second, the self-calibration is based on distance measurement rather than absolute position measurement for kinematic identification; therefore the calibration of the transformation from the world coordinate system to the robot base coordinate system, known as base calibration, is not necessary. These features not only greatly facilitate robot system calibration but also shorten the error propagation chain and therefore increase the accuracy of parameter estimation. An integrated calibration system was designed to validate the effectiveness of this method. Experimental results show that after calibration there is a significant improvement in robot accuracy over a typical robot workspace.
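The key idea, that distance measurements make base calibration unnecessary, can be sketched as a residual function for kinematic identification. The helper below is hypothetical and only illustrates why the world-to-base transform drops out: distances between end-effector positions are invariant under any rigid transform of the base frame.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def distance_residuals(predicted_pts, measured_dists):
    """Residuals for distance-based identification: compare every
    inter-point distance predicted by the kinematic model against the
    corresponding measured distance (ordered pairwise: (0,1), (0,2), ...).
    Since distances are base-frame invariant, no base calibration is
    needed before minimizing these residuals over kinematic parameters."""
    res, k = [], 0
    for i in range(len(predicted_pts)):
        for j in range(i + 1, len(predicted_pts)):
            res.append(dist(predicted_pts[i], predicted_pts[j])
                       - measured_dists[k])
            k += 1
    return res
```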
5. Cation, Sarah, Michele Oliver, Robert Joel Jack, James P. Dickey, and Natasha Lee Shee. "Whole-Body Vibration Sensor Calibration Using a Six-Degree of Freedom Robot." Advances in Acoustics and Vibration 2011 (May 12, 2011): 1–7. http://dx.doi.org/10.1155/2011/276898.

Abstract:
Exposure to whole-body vibration (WBV) is associated with a wide variety of health disorders, and as a result WBV levels are frequently assessed. Literature on WBV accelerations rarely addresses the calibration techniques and procedures used for WBV sensors in any depth, nor is detailed information provided regarding such procedures or sensor calibration ranges. The purpose of this paper is to describe a calibration method for a 6-DOF transducer using a hexapod robot. Also described is a separate motion capture technique used to verify the calibration for acceleration values outside the robot calibration range, in order to establish an acceptable calibration range for WBV environments. The sensor calibrated in this study used linear (Y = mX) calibration equations, resulting in r² values greater than 0.97 for maximum and minimum acceleration amplitudes of up to ±8 m/s² and velocity amplitudes of up to ±100°/s. The motion capture technique verified that the translational calibrations held for accelerations up to ±4 g. Thus, the calibration procedures were shown to calibrate the sensor through the expected range for 6-DOF WBV field measurements for off-road vehicles, even when subjected to shocks from high-speed travel over rough terrain.
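The Y = mX calibration with r² reporting is simple enough to sketch directly; this is a generic least-squares fit through the origin, not the authors' software:

```python
def fit_through_origin(x, y):
    """Least-squares slope for the one-parameter model Y = mX used in
    sensor calibration, plus the coefficient of determination r^2."""
    m = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    y_mean = sum(y) / len(y)
    ss_res = sum((yi - m * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)
    return m, 1.0 - ss_res / ss_tot
```

In a real calibration, x would be the reference accelerations commanded by the robot and y the raw sensor readings (or vice versa), repeated per axis.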
6. Wang, Zhanxi, Jing Bai, Xiaoyu Zhang, Xiansheng Qin, Xiaoqun Tan, and Yali Zhao. "Base Detection Research of Drilling Robot System by Using Visual Inspection." Journal of Robotics 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/8767531.

Abstract:
This paper expounds the principle and method of calibration and base detection using a visual measurement system for detecting and correcting the installation error between the workpiece and the robot drilling system. The system includes a Cognex Insight 5403 high-precision industrial camera, a light source, and a KEYENCE coaxial IL-300 laser displacement sensor. Three-base-hole and two-base-hole methods are proposed to analyze the transfer relation between the base coordinate system of the actual drilling robot and that of the theoretical drilling robot. The corresponding vision coordinate calibration and base detection experiments are examined, and the data indicate that the result of base detection is close to the correct value.
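A plausible reading of the three-base-hole method is that three measured hole centers define a base frame, and comparing the measured frame with the theoretical one yields the correction transform. The sketch below makes that assumption explicit; the frame convention (origin at hole 1, x-axis toward hole 2, z-axis normal to the hole plane) is a choice made for this example, not taken from the paper:

```python
import numpy as np

def frame_from_three_holes(p1, p2, p3):
    """Build a 4x4 homogeneous frame from three reference-hole centers:
    origin at p1, x-axis toward p2, z-axis normal to the hole plane."""
    x = p2 - p1
    x = x / np.linalg.norm(x)
    z = np.cross(x, p3 - p1)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p1
    return T

def base_correction(actual, theoretical):
    """Transform mapping the theoretical robot base frame onto the
    measured one: T_corr = T_actual @ inv(T_theoretical)."""
    return actual @ np.linalg.inv(theoretical)
```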
7. Idrobo-Pizo, Gerardo Antonio, José Maurício S. T. Motta, and Renato Coral Sampaio. "A Calibration Method for a Laser Triangulation Scanner Mounted on a Robot Arm for Surface Mapping." Sensors 19, no. 8 (April 14, 2019): 1783. http://dx.doi.org/10.3390/s19081783.

Abstract:
This paper presents and discusses a method to calibrate a specially built laser triangulation sensor that scans and maps the surface of hydraulic turbine blades and assigns 3D coordinates to a dedicated robot that repairs, by welding in layers, blade damage caused by cavitation pitting and/or cracks produced by cyclic loading. Due to the large nonlinearities present in the camera and laser diodes, large range distances are difficult to measure with high precision. To improve the precision and accuracy of the range measurement sensor based on laser triangulation, a calibration model is proposed that involves the parameters of the camera, lens, laser positions, and the sensor position on the robot arm relative to the robot base, so as to achieve the best accuracy in the distance range of the application. The developed sensor is composed of a CMOS camera and two laser diodes that project light lines onto the blade surface, and image processing is needed to find the 3D coordinates. The distances vary from 250 to 650 mm, and the accuracy obtained within this range is below 1 mm. The calibration process requires a prior camera calibration and special calibration boards to calculate the correct distance between the laser diodes and the camera. The sensor position fixed on the robot arm is found by moving the robot to selected positions. The experimental procedures show the success of the calibration scheme.
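For intuition about why triangulation accuracy varies with range, an idealized triangulation model (laser beam parallel to the optical axis, no lens distortion; a simplification the paper's full calibration model goes well beyond) is:

```python
def triangulation_depth(u_px, focal_px, baseline_m):
    """Idealized laser-triangulation range: the laser is parallel to the
    optical axis at lateral offset `baseline_m`; a spot imaged at pixel
    offset `u_px` from the principal point lies at depth z = f * b / u.
    Since z is inversely proportional to u, a fixed pixel-measurement
    error causes a depth error that grows roughly as z^2 / (f * b)."""
    if u_px <= 0:
        raise ValueError("spot must be offset from the principal point")
    return focal_px * baseline_m / u_px
```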
8. Fu, Jinsheng, Yabin Ding, Tian Huang, and Xianping Liu. "Hand-eye calibration method with a three-dimensional-vision sensor considering the rotation parameters of the robot pose." International Journal of Advanced Robotic Systems 17, no. 6 (November 1, 2020): 172988142097729. http://dx.doi.org/10.1177/1729881420977296.

Abstract:
Hand-eye calibration is a fundamental step for a robot equipped with a vision system. However, this problem usually interacts with robot calibration, because the robot's geometric parameters are not very precise. In this article, a new calibration method considering the rotation parameters of the robot pose is proposed. First, a constrained least-squares model is established, assuming that every measurement of the standard ball's center is identical in the robot base frame; this provides an initial solution. To further improve the solution accuracy, a nonlinear calibration model in the sensor frame is established. Since it removes one error accumulation process, a more accurate reference point can be used for optimization. Then, the rotation parameters of the robot pose whose slight errors cause large disturbances to the solution are selected by analyzing the coefficient matrices of the error terms. Finally, the hand-eye transformation parameters are refined together with the rotation parameters in the nonlinear optimization. Comparative simulations are performed between the modified least-squares method, the constrained least-squares method, and the proposed method. Experiments are conducted on a 5-axis hybrid robot named TriMule to demonstrate the superior accuracy of the proposed method.
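The fixed-ball-center constraint can be illustrated with a linear solve for the hand-eye translation and the unknown ball center, assuming (for this sketch only) that the hand-eye rotation is already known; this is a simplified stand-in for the paper's formulation, not its actual solver:

```python
import numpy as np

def handeye_translation(R_flange, t_flange, p_sensor, R_x):
    """Linear least squares for the hand-eye translation t_x and the fixed
    ball center c, given flange poses (R_i, t_i), sensor-frame ball-center
    measurements p_i, and a known (or previously estimated) hand-eye
    rotation R_x. Constraint: R_i (R_x p_i + t_x) + t_i = c for each pose,
    rearranged as R_i t_x - c = -t_i - R_i R_x p_i and stacked."""
    rows, rhs = [], []
    for R_i, t_i, p_i in zip(R_flange, t_flange, p_sensor):
        rows.append(np.hstack([R_i, -np.eye(3)]))
        rhs.append(-t_i - R_i @ (R_x @ p_i))
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
    return sol[:3], sol[3:]   # (t_x, c)
```

At least two flange poses with distinct orientations are needed for the stacked system to have full rank.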
9. Gattringer, Hubert, Andreas Müller, and Philip Hoermandinger. "Design and Calibration of Robot Base Force/Torque Sensors and Their Application to Non-Collocated Admittance Control for Automated Tool Changing." Sensors 21, no. 9 (April 21, 2021): 2895. http://dx.doi.org/10.3390/s21092895.

Abstract:
Robotic manipulators physically interacting with their environment must be able to measure contact forces/torques. The standard approach is to attach a force/torque sensor directly at the end-effector (EE). This provides accurate measurements, but at significant cost. Indirect measurement of the EE loads by means of torque sensors at the actuated joints of a robot is an alternative, in particular for series-elastic actuators, but requires dedicated robot designs and significantly increases costs. In this paper, two alternative sensor concepts for indirect measurement of EE loads are presented. Both sensors are located at the robot base. The first design mounts the robot on three load cells. The second consists of a steel plate suspended at four spokes; at each spoke, strain gauges measure the local deformation, which is related to the load at the sensor plate (resembling the main principle of a force/torque sensor). Inferring the EE load from the base wrench so determined necessitates a dynamic model of the robot that accounts for static as well as dynamic loads. A prototype implementation of both concepts is reported. Special attention is given to the model-based calibration, which is crucial for these indirect measurement concepts. Experimental results are shown for a tool-changing task, which to some extent resembles the well-known peg-in-hole problem.
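The subtraction underlying the non-collocated scheme, measured base wrench minus model-predicted wrench, can be sketched as follows; the point-contact assumption and the function interface are inventions of this example, not the authors' model:

```python
import numpy as np

def ee_force_from_base_wrench(f_meas, tau_meas, f_model, tau_model, r_ee):
    """Indirect EE-load estimate: subtract the model-predicted base wrench
    (gravity plus dynamic loads of the moving robot) from the measured
    one; the remainder is attributed to a contact force at the EE
    position r_ee (expressed in the base-sensor frame). Returns the
    estimated contact force and the torque residual tau - r x f, which
    vanishes for a pure point force acting at r_ee."""
    f_ext = f_meas - f_model
    tau_ext = tau_meas - tau_model
    return f_ext, tau_ext - np.cross(r_ee, f_ext)
```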
10. Aalerud, Atle, Joacim Dybedal, and Geir Hovland. "Automatic Calibration of an Industrial RGB-D Camera Network Using Retroreflective Fiducial Markers." Sensors 19, no. 7 (March 31, 2019): 1561. http://dx.doi.org/10.3390/s19071561.

Abstract:
This paper describes a non-invasive, automatic, and robust method for calibrating a scalable RGB-D sensor network based on retroreflective ArUco markers and the iterative closest point (ICP) scheme. We demonstrate the system by calibrating a sensor network comprising six sensor nodes positioned in a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. Here, the automatic calibration achieved an average Euclidean error of 3 cm at distances up to 9.45 m. To achieve robustness, we apply several innovative techniques: first, we mitigate the ambiguity problem that occurs when detecting a marker at long range or low resolution by comparing the camera projection with depth data; second, we use retroreflective fiducial markers in the RGB-D calibration for improved accuracy and detectability; finally, the repeated ICP refinement uses an exact region of interest so that only the precise depth measurements of the retroreflective surfaces are employed. The complete calibration software and a recorded dataset are publicly available and open source.
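The depth-versus-projection disambiguation trick can be caricatured as picking the planar-pose hypothesis whose predicted corner depths best match the depth image; a toy version, assuming the per-hypothesis corner depths have already been computed from the two ambiguous marker poses:

```python
def resolve_pose_ambiguity(candidates, measured):
    """Each candidate is a list of predicted marker-corner depths for one
    of the two ambiguous planar poses; return the index of the candidate
    closest (in summed squared error) to the depths read from the
    depth image."""
    def sse(pred):
        return sum((p - m) ** 2 for p, m in zip(pred, measured))
    return min(range(len(candidates)), key=lambda i: sse(candidates[i]))
```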

Dissertations / Theses on the topic "Calibration of robot base with depth sensor"

1. Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work also details an analysis and experimental study of the Kinect depth sensor's capabilities and performance, comprising an examination of the resolution, quantization error, and random distribution of depth data; the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one dedicated to the Xbox 360 video game console and the more recent Microsoft Kinect for Windows. The study is extended to the design of a rapid acquisition system for large workspaces that links multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds within the range of depth measurement accuracy permitted by the Kinect technology. The method calibrates all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines a comprehensive calibration chain from the reference Kinect to the robot.
The latter can then be used to interact under visual guidance with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
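The calibration chain described here (each Kinect to the reference Kinect, the reference Kinect to the robot base) amounts to composing homogeneous transforms; a minimal sketch, with the frame names assumed for illustration:

```python
import numpy as np

def chain(*transforms):
    """Compose 4x4 homogeneous transforms left to right. For example,
    T_base_from_kinect = chain(T_base_from_ref, T_ref_from_kinect):
    once every Kinect is calibrated against the reference unit and the
    reference is calibrated against the robot base, any sensor's points
    map into the robot frame."""
    T = np.eye(4)
    for M in transforms:
        T = T @ M
    return T
```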

Book chapters on the topic "Calibration of robot base with depth sensor"

1. Lamberth, Curt, and Jocelyne M. R. Hughes. "Physical Variables in Freshwater Ecosystems." In Freshwater Ecology and Conservation, 106–32. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198766384.003.0006.

Abstract:
We consider three categories of physical variables that can be measured for different freshwater ecosystems: (1) variables measured or described at the catchment or sub-catchment scale (e.g., bathymetry, depth, topography, geology); (2) those in or near to the water (e.g., temperature, turbidity, solar radiation); and (3) variables used to describe the substrate (e.g., particle size, mineral vs. peat). In this chapter we consider the practical aspects of undertaking a freshwater survey that includes measurement of physical variables; the approaches needed to undertake the survey; choosing a sampling strategy or protocol; practical tips on choice of measurement method or sensor, battery type, equipment calibration, resolution, accuracy, and links to literature providing further detail. The final section provides examples from a diversity of freshwaters where physical variables have been measured as part of an ecological survey, forming the evidence-base for management or conservation decisions.

Conference papers on the topic "Calibration of robot base with depth sensor"

1. Wurdemann, Helge A., Evangelos Georgiou, Lei Cui, and Jian S. Dai. "SLAM Using 3D Reconstruction via a Visual RGB and RGB-D Sensory Input." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47735.

Abstract:
This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. Together they cover the major features of SLAM: mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output: RGB, from a CMOS camera, and RGB-D, from a depth camera. The sensors are mounted on the KCLBOT, an autonomous nonholonomic two-wheel maneuverable mobile robot. The mobile robot platform can self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position that the mobile robot must reach while taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. After extracting the images from the sensor following a calibration routine, a 3D reconstruction of the traversable environment is produced for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost effectiveness of this off-the-shelf sensor array, show the effectiveness of producing a 3D reconstruction of an environment, and establish the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, enabling the implementation of SLAM on this type of platform.
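Turning a depth image into the 3D model used for navigation hinges on pinhole back-projection; a one-pixel sketch with assumed intrinsics (fx, fy, cx, cy), not tied to the paper's specific calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of one depth pixel (u, v) to a 3D point in
    the camera frame: the core step when turning a depth image into the
    point cloud / 3D reconstruction a robot navigates. Intrinsics are
    the focal lengths (fx, fy) and principal point (cx, cy) in pixels."""
    z = depth
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

Iterating this over every valid depth pixel (and transforming by the camera pose) yields the scene point cloud.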
2. Burns, Brian, and Biswanath Samanta. "Human Identification for Human-Robot Interactions." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-38496.

Abstract:
In co-robotics applications, robots must identify human partners and recognize their status in dynamic interactions for enhanced acceptance and effectiveness as socially interactive agents. Using data from depth cameras, people can be identified from their skeletal information. This paper presents the implementation of a human identification algorithm using a depth camera (Carmine from PrimeSense), an open-source middleware (NITE from OpenNI) with the Java-based Processing language, and an Arduino microcontroller. This implementation and communication scheme sets a framework for future applications of human-robot interaction. Based on the movements of the individual in the depth sensor's field of view, the program can be set to track a human skeleton or the closest pixel in the image. Joint locations in the tracked human can be isolated for specific usage by the program; joints include the head, torso, shoulders, elbows, hands, knees, and feet. Logic and calibration techniques were used to create systems such as a facial-tracking pan-and-tilt servomotor mechanism. The control system presented here sets the groundwork for future implementation in student-built animatronic figures and mobile robot platforms such as Turtlebot.
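The facial-tracking pan/tilt mechanism boils down to converting a tracked 3D joint position into two servo angles; a generic sketch (the axis convention, z forward, x right, y up, is assumed here, not taken from the paper):

```python
import math

def pan_tilt_from_point(x, y, z):
    """Pan and tilt angles (radians) that aim a camera at a tracked joint
    located at (x, y, z) in the sensor frame (z forward, x right, y up),
    the kind of mapping behind a face-tracking pan/tilt servo mount."""
    pan = math.atan2(x, z)
    tilt = math.atan2(y, math.hypot(x, z))
    return pan, tilt
```

On an Arduino-driven mount, these angles would then be scaled to the servo's pulse-width range before being written out.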
3. Midlam-Mohler, Shawn, Sai Rajagopalan, Kenneth P. Dudek, Yann Guezennec, and Steve Yurkovich. "Control Oriented Modeling of a Three Way Catalyst Coupled With Oxygen Sensors." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2140.

Abstract:
Modeling of three-way catalyst behavior in stoichiometric engines is a topic with significant depth of research, encompassing everything from complex kinetics-based models to highly simplified control-oriented models. For model-based control design, one must consider the behavior of the catalyst in conjunction with the feedback oxygen sensors. These sensors have well-known influences from exhaust gas species due to interaction with the catalyst which, if ignored, can cause significant difficulties in modeling and control. These effects have often been addressed by calibrating and validating catalyst models under simplified conditions in order to minimize errors. In this work, the root cause of many of these errors is investigated and experimental evidence is presented. Additionally, ARMA and Hammerstein models are used to find a model capable of predicting the post-catalyst oxygen sensor response over realistic validation data.
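A Hammerstein model, one of the two structures the paper fits, is a static nonlinearity followed by linear dynamics; a first-order toy version for intuition (not the authors' identified model):

```python
def simulate_hammerstein(u_seq, nonlin, a, b):
    """Hammerstein structure: a static nonlinearity followed by a linear
    first-order filter, y[k] = a * y[k-1] + b * nonlin(u[k]). In system
    identification only nonlin, a, and b would be fitted to data; here
    they are supplied directly."""
    y, out = 0.0, []
    for u in u_seq:
        y = a * y + b * nonlin(u)
        out.append(y)
    return out
```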
4. Zhu, Meng Xi, Christian Scharfenberger, Alexander Wong, and David A. Clausi. "Simultaneous Scene Reconstruction and Auto-Calibration Using Constrained Iterative Closest Point for 3D Depth Sensor Array." In 2015 12th Conference on Computer and Robot Vision (CRV). IEEE, 2015. http://dx.doi.org/10.1109/crv.2015.13.
