Follow this link to see other types of publications on the topic: Calibration of robot base with depth sensor.

Journal articles on the topic "Calibration of robot base with depth sensor"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 journal articles for your research on the topic "Calibration of robot base with depth sensor".

Next to each source in the reference list you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Zhou, Liling, Yingzi Wang, Yunfei Liu, Haifeng Zhang, Shuaikang Zheng, Xudong Zou, and Zhitian Li. "A Tightly-Coupled Positioning System of Online Calibrated RGB-D Camera and Wheel Odometry Based on SE(2) Plane Constraints". Electronics 10, no. 8 (April 19, 2021): 970. http://dx.doi.org/10.3390/electronics10080970.

Full text
Abstract
The emergence of the Automated Guided Vehicle (AGV) has greatly increased the efficiency of the transportation industry, creating an urgent requirement for accurate and easy-to-use positioning of 2D planar-motion robots. Multi-sensor fusion positioning has gradually become an important technical route for improving overall efficiency in AGV positioning. As a sensor that directly acquires depth, the RGB-D camera has received extensive attention for indoor positioning in recent years, while wheel odometry is the sensor that comes with most two-dimensional planar-motion robots, and its parameters do not change over time. Both the RGB-D camera and wheel odometry are commonly used sensors for indoor robot positioning, but existing research on their fusion is largely based on classic filtering algorithms; few fusion solutions based on optimization are available at present. To ensure practicability and greatly improve the accuracy of an RGB-D and odometry fusion positioning scheme, this paper proposes a tightly-coupled positioning scheme of an online calibrated RGB-D camera and wheel odometry based on SE(2) plane constraints. Experiments show that the angular error of the extrinsic parameters in the calibration stage is less than 0.5 degrees, and the displacement error of the extrinsic parameters reaches the millimeter level. The field-test positioning accuracy of the proposed system reaches centimeter level on a dataset without pre-calibration, which is better than ORB-SLAM2 relying solely on RGB-D cameras. The experimental results verify the framework's excellent positioning accuracy and ease of use, and show that it is a promising technical solution for two-dimensional AGV positioning.
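The SE(2) plane constraint in this abstract amounts to restricting poses to (x, y, θ) on the motion plane. As a minimal illustration, not code from the paper, here is a hedged sketch of SE(2) pose composition, the basic operation by which odometry and camera increments are chained:

```python
import math

def se2_compose(a, b):
    """Compose two SE(2) poses a and b, each given as (x, y, theta):
    returns pose b expressed through frame a."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by,
            ay + s * bx + c * by,
            (ath + bth) % (2.0 * math.pi))

# Chaining a 90-degree turn with a 1 m forward step.
pose = se2_compose((1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0))
```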
2

Li, Chunguang, Chongben Tao, and Guodong Liu. "3D Visual SLAM Based on Multiple Iterative Closest Point". Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/943510.

Full text
Abstract
With the development of novel RGB-D visual sensors, data association has become a basic problem in 3D Visual Simultaneous Localization and Mapping (VSLAM). To solve this problem, a VSLAM algorithm based on Multiple Iterative Closest Point (MICP) is presented. By using both the RGB and depth information obtained from an RGB-D camera, 3D models of an indoor environment can be reconstructed, which provide extensive knowledge for mobile robots to accomplish tasks such as VSLAM and human-robot interaction. Due to the limited field of view of the RGB-D camera, additional information about the camera pose is needed. In this paper, the motion of the RGB-D camera is estimated by a motion capture system after a calibration process. Based on the estimated pose, the MICP algorithm is used to improve the alignment. A Kinect mobile robot running the Robot Operating System and the motion capture system were used for experiments. Experimental results show that the proposed VSLAM algorithm not only achieves good accuracy and reliability but also generates the 3D map in real time.
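ICP alternates between matching nearest points and solving a rigid alignment. Here is a hedged sketch, under the simplifying assumption of known correspondences and not the paper's MICP implementation, of the SVD-based alignment step:

```python
import numpy as np

def align_svd(src, dst):
    """One ICP alignment step with known correspondences: find the
    rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    (the Kabsch / Procrustes solution)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```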
3

Chi, Chen Tung, Shih Chien Yang, and Yin Tien Wang. "Calibration of RGB-D Sensors for Robot SLAM". Applied Mechanics and Materials 479-480 (December 2013): 677–81. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.677.

Full text
Abstract
This paper presents a calibration procedure for a Kinect RGB-D sensor and its application to robot simultaneous localization and mapping (SLAM). The calibration procedure consists of two stages: in the first stage, the RGB image is aligned with the depth image by bilinear interpolation; the distorted RGB image is then corrected in the second stage. The calibrated RGB-D sensor is used as the sensing device for robot navigation in an unknown environment. In the SLAM tasks, speeded-up robust features (SURF) are detected in the RGB image and used as landmarks in the environment map, while the depth image provides the stereo information for each landmark. Meanwhile, the robot estimates its own state and the landmark locations by means of the extended Kalman filter (EKF). EKF SLAM was carried out in the paper, and the experimental results showed that the Kinect sensor provides the mobile robot with reliable measurement information when navigating in an unknown environment.
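The first calibration stage described above resamples one image onto the grid of the other. A minimal sketch of the bilinear interpolation used for such resampling (illustrative only; boundary handling is omitted):

```python
def bilerp(img, x, y):
    """Sample a 2D image (list of rows) at fractional coordinates (x, y)
    by bilinear interpolation of the four surrounding pixels.
    Valid for interior samples only (x, y at least one pixel from the
    right/bottom edge)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])
```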
4

Gong, Chunhe, Jingxia Yuan, and Jun Ni. "A Self-Calibration Method for Robotic Measurement System". Journal of Manufacturing Science and Engineering 122, no. 1 (June 1, 1999): 174–81. http://dx.doi.org/10.1115/1.538916.

Full text
Abstract
Robot calibration plays an increasingly important role in manufacturing. For robot calibration on the manufacturing floor, it is desirable that the calibration technique be easy and convenient to implement. This paper presents a new self-calibration method to calibrate and compensate for robot system kinematic errors. Compared with traditional calibration methods, this method has several unique features. First, it is not necessary to apply an external measurement system to measure the robot end-effector position for kinematic identification, since the robot measurement system has a sensor as an integral part. Second, this self-calibration is based on distance measurement rather than absolute position measurement for kinematic identification; therefore the calibration of the transformation from the world coordinate system to the robot base coordinate system, known as base calibration, is not necessary. These features not only greatly facilitate robot system calibration but also shorten the error propagation chain, thereby increasing the accuracy of parameter estimation. An integrated calibration system is designed to validate the effectiveness of this calibration method. Experimental results show that after calibration there is a significant improvement in robot accuracy over a typical robot workspace.
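The key idea, calibrating from distances so the unknown world-to-base transform cancels out, can be sketched as a residual function. This is an illustration of the principle, not the paper's formulation:

```python
import numpy as np

def distance_residual(p_model_a, p_model_b, d_measured):
    """Distance-based calibration residual: compare the model-predicted
    distance between two end-effector positions with a measured distance.
    Because only a distance enters, any rigid world-to-base transform
    applied to both points leaves the residual unchanged."""
    return np.linalg.norm(p_model_a - p_model_b) - d_measured
```

Minimizing such residuals over many pose pairs identifies kinematic parameters without base calibration.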
5

Cation, Sarah, Michele Oliver, Robert Joel Jack, James P. Dickey, and Natasha Lee Shee. "Whole-Body Vibration Sensor Calibration Using a Six-Degree of Freedom Robot". Advances in Acoustics and Vibration 2011 (May 12, 2011): 1–7. http://dx.doi.org/10.1155/2011/276898.

Full text
Abstract
Exposure to whole-body vibration (WBV) is associated with a wide variety of health disorders, and as a result WBV levels are frequently assessed. The literature on WBV accelerations rarely addresses the calibration techniques and procedures used for WBV sensors in any depth, nor does it provide detailed information on such procedures or sensor calibration ranges. The purpose of this paper is to describe a calibration method for a 6-DOF transducer using a hexapod robot. Also described is a separate motion capture technique used to verify the calibration for acceleration values outside the robot calibration range, in order to cover an acceptable calibration range for WBV environments. The sensor calibrated in this study used linear (Y = mX) calibration equations, resulting in r² values greater than 0.97 for acceleration amplitudes of up to ±8 m/s² and velocity amplitudes of up to ±100°/s. The motion capture technique verified that the translational calibrations held for accelerations up to ±4 g. Thus, the calibration procedures were shown to calibrate the sensor through the expected range for 6-DOF WBV field measurements for off-road vehicles, even when subjected to shocks resulting from high-speed travel over rough terrain.
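The linear Y = mX calibration with an r² quality measure mentioned above can be sketched as a small through-the-origin least-squares fit. The form is taken from the abstract; the study's actual data handling is not shown:

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope m for the model Y = mX (no intercept),
    plus the r-squared of the fit against the mean of y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = np.dot(x, y) / np.dot(x, x)          # closed-form LS slope
    resid = y - m * x
    ss_res = np.dot(resid, resid)
    ss_tot = np.dot(y - y.mean(), y - y.mean())
    return m, 1.0 - ss_res / ss_tot
```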
6

Wang, Zhanxi, Jing Bai, Xiaoyu Zhang, Xiansheng Qin, Xiaoqun Tan, and Yali Zhao. "Base Detection Research of Drilling Robot System by Using Visual Inspection". Journal of Robotics 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/8767531.

Full text
Abstract
This paper expounds the principle and method of calibration and base detection using a visual measurement system for detecting and correcting the installation error between the workpiece and the robot drilling system. The system comprises a Cognex Insight 5403 high-precision industrial camera, a light source, and a KEYENCE IL-300 coaxial laser displacement sensor. Three-base-hole and two-base-hole methods are proposed to analyze the transfer relation between the base coordinate system of the actual drilling robot and that of the theoretical drilling robot. The corresponding vision coordinate calibration and base detection experiments were carried out, and the data indicate that the base detection results are close to the correct values.
7

Idrobo-Pizo, Gerardo Antonio, José Maurício S. T. Motta, and Renato Coral Sampaio. "A Calibration Method for a Laser Triangulation Scanner Mounted on a Robot Arm for Surface Mapping". Sensors 19, no. 8 (April 14, 2019): 1783. http://dx.doi.org/10.3390/s19081783.

Full text
Abstract
This paper presents and discusses a method to calibrate a specially built laser triangulation sensor to scan and map the surface of hydraulic turbine blades and to assign 3D coordinates to a dedicated robot to repair, by welding in layers, the damage on blades eroded by cavitation pitting and/or cracks produced by cyclic loading. Due to the large nonlinearities present in a camera and laser diodes, large range distances become difficult to measure with high precision. Aiming to improve the precision and accuracy of the range measurement sensor based on laser triangulation, a calibration model is proposed that involves the parameters of the camera, lens, laser positions, and sensor position on the robot arm related to the robot base to find the best accuracy in the distance range of the application. The developed sensor is composed of a CMOS camera and two laser diodes that project light lines onto the blade surface and needs image processing to find the 3D coordinates. The distances vary from 250 to 650 mm and the accuracy obtained within the distance range is below 1 mm. The calibration process needs a previous camera calibration and special calibration boards to calculate the correct distance between the laser diodes and the camera. The sensor position fixed on the robot arm is found by moving the robot to selected positions. The experimental procedures show the success of the calibration scheme.
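At its core, laser triangulation maps a pixel offset to a range via the camera/laser geometry. Below is a deliberately simplified pinhole-model sketch; the paper's calibrated model additionally accounts for lens distortion and the sensor's pose on the robot arm:

```python
def triangulation_range(baseline_m, focal_px, pixel_offset):
    """Idealized laser-triangulation range: with the laser source offset
    `baseline_m` meters from the camera and the laser spot imaged
    `pixel_offset` pixels from the principal point, similar triangles give
    range z = focal_px * baseline_m / pixel_offset."""
    return focal_px * baseline_m / pixel_offset
```

Note how the pixel offset shrinks as range grows, which is why precision degrades at large distances and careful calibration is needed.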
8

Fu, Jinsheng, Yabin Ding, Tian Huang, and Xianping Liu. "Hand-eye calibration method with a three-dimensional-vision sensor considering the rotation parameters of the robot pose". International Journal of Advanced Robotic Systems 17, no. 6 (November 1, 2020): 172988142097729. http://dx.doi.org/10.1177/1729881420977296.

Full text
Abstract
Hand-eye calibration is a fundamental step for a robot equipped with a vision system. However, this problem usually interacts with robot calibration because the robot's geometric parameters are not very precise. In this article, a new calibration method considering the rotation parameters of the robot pose is proposed. First, a constrained least squares model is established assuming that the measured center of a standard sphere is identical in the robot base frame for every measurement, which provides an initial solution. To further improve the solution accuracy, a nonlinear calibration model in the sensor frame is established. Since it removes one error accumulation step, a more accurate reference point can be used for optimization. Then, the rotation parameters of the robot pose whose slight errors cause large disturbances to the solution are selected by analyzing the coefficient matrices of the error terms. Finally, the hand-eye transformation parameters are refined together with these rotation parameters in the nonlinear optimization. Comparative simulations are performed between the modified least squares method, the constrained least squares method, and the proposed method. Experiments conducted on a 5-axis hybrid robot named TriMule demonstrate the superior accuracy of the proposed method.
9

Gattringer, Hubert, Andreas Müller, and Philip Hoermandinger. "Design and Calibration of Robot Base Force/Torque Sensors and Their Application to Non-Collocated Admittance Control for Automated Tool Changing". Sensors 21, no. 9 (April 21, 2021): 2895. http://dx.doi.org/10.3390/s21092895.

Full text
Abstract
Robotic manipulators physically interacting with their environment must be able to measure contact forces/torques. The standard approach is to attach a force/torque sensor directly at the end-effector (EE). This provides accurate measurements, but at a significant cost. Indirect measurement of the EE loads by means of torque sensors at the actuated joints of a robot is an alternative, in particular for series-elastic actuators, but requires dedicated robot designs and significantly increases costs. In this paper, two alternative sensor concepts for indirect measurement of EE loads are presented. Both sensors are located at the robot base. The first sensor design involves three load cells on which the robot is mounted. The second concept consists of a steel plate suspended at four spokes. Strain gauges attached at each spoke measure the local deformation, which is related to the load on the sensor plate (resembling the working principle of a force/torque sensor). Inferring the EE load from the base wrench determined in this way necessitates a dynamic model of the robot that accounts for static as well as dynamic loads. A prototype implementation of both concepts is reported. Special attention is given to the model-based calibration, which is crucial for these indirect measurement concepts. Experimental results are shown for the novel sensors employed in a tool changing task, which to some extent resembles the well-known peg-in-hole problem.
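In the static case, the EE load follows from the base wrench by subtracting the modeled gravity load of the robot itself. Here is a hedged sketch of that force-only special case; the paper's method also compensates dynamic loads and torques via a full dynamic model:

```python
import numpy as np

def ee_force_from_base(f_base, robot_mass_kg, g=9.81):
    """Static estimate of the external end-effector force from a base
    force sensor: the base reads the robot's own weight plus the external
    load, so subtracting the modeled gravity force isolates the EE load.
    (z axis pointing up; forces in newtons)."""
    gravity_load = np.array([0.0, 0.0, -robot_mass_kg * g])
    return np.asarray(f_base, float) - gravity_load
```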
10

Aalerud, Atle, Joacim Dybedal, and Geir Hovland. "Automatic Calibration of an Industrial RGB-D Camera Network Using Retroreflective Fiducial Markers". Sensors 19, no. 7 (March 31, 2019): 1561. http://dx.doi.org/10.3390/s19071561.

Full text
Abstract
This paper describes a non-invasive, automatic, and robust method for calibrating a scalable RGB-D sensor network based on retroreflective ArUco markers and the iterative closest point (ICP) scheme. We demonstrate the system by calibrating a sensor network comprising six sensor nodes positioned in a relatively large industrial robot cell with an approximate size of 10 m × 10 m × 4 m. Here, the automatic calibration achieved an average Euclidean error of 3 cm at distances up to 9.45 m. To achieve robustness, we apply several innovative techniques: Firstly, we mitigate the ambiguity problem that occurs when detecting a marker at long range or low resolution by comparing the camera projection with depth data. Secondly, we use retroreflective fiducial markers in the RGB-D calibration for improved accuracy and detectability. Finally, the repeated ICP refinement uses an exact region of interest so that we employ the precise depth measurements of the retroreflective surfaces only. The complete calibration software and a recorded dataset are publicly available and open source.
11

Andrade Chavez, Francisco Javier, Silvio Traversaro, and Daniele Pucci. "Six-Axis Force Torque Sensor Model-Based In Situ Calibration Method and Its Impact in Floating-Based Robot Dynamic Performance". Sensors 19, no. 24 (December 13, 2019): 5521. http://dx.doi.org/10.3390/s19245521.

Full text
Abstract
A crucial part of dynamic motions is the interaction with other objects or the environment. Floating-base robots have yet to perform these motions repeatably and reliably. Force torque (FT) sensors are able to provide the full description of a contact. Despite that, their use beyond simple threshold logic is not widespread in floating-base robots. FT sensors may change performance when mounted, which is why in situ calibration methods can improve the performance of robots by ensuring better FT measurements. The model-based in situ calibration method with temperature compensation has shown promising results in improving FT sensor measurements. There are two main goals for this paper. The first is to facilitate the use and understanding of the method by providing guidelines that show its usefulness through experimental results. Then, the impact of having better FT measurements with no temperature drift is demonstrated by proving that the offset estimated with this method is still useful days and even a month after the time of estimation. The effect of this is showcased by comparing the sensor response with different offsets simultaneously during real robot experiments. Furthermore, quantitative results of the improvement in dynamic behaviors due to the in situ calibration are shown. Finally, we show how using better FT measurements as feedback in low- and high-level controllers can impact the performance of floating-base robots during dynamic motions. Experiments were performed on the floating-base robot iCub.
12

Mulyanto, Agus, Rohmat Indra Borman, Purwono Prasetyawan, and A. Sumarudin. "Implementation 2D Lidar and Camera for detection object and distance based on RoS". JOIV : International Journal on Informatics Visualization 4, no. 4 (December 18, 2020): 231. http://dx.doi.org/10.30630/joiv.4.4.466.

Full text
Abstract
Advanced driver assistance systems (ADAS) address the problem of protecting people from vehicle collisions. A collision warning system is a very important part of ADAS, protecting people from accidents caused by fatigue, drowsiness, and other human errors. Multiple sensors have been widely used in ADAS for environment perception, such as cameras, radar, and light detection and ranging (LiDAR). The relative orientation and translation between two sensors must be considered when performing fusion. We discuss a real-time collision warning system that uses 2D LiDAR and camera sensors for environment perception and estimates the distance (depth) and angle of obstacles. In this paper, we propose a fusion of two sensors, a camera and a 2D LiDAR, to obtain the distance and angle of an obstacle in front of the vehicle, implemented on an Nvidia Jetson Nano using the Robot Operating System (ROS). A calibration process between the camera and the 2D LiDAR is therefore required, which is presented in Section III. After that, integration and testing are carried out using static and dynamic scenarios in the relevant environment. For fusion, we implement the conversion from angles to coordinates. Based on the experiments, we obtained an average of 0.197 meters.
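The "conversion from angles to coordinates" used for fusion corresponds to mapping each LiDAR beam from polar to Cartesian coordinates in the sensor frame. A minimal sketch using ROS LaserScan-style fields (illustrative, not the authors' code):

```python
import math

def scan_to_points(angle_min, angle_increment, ranges):
    """Convert a 2D LiDAR scan from polar (angle, range) to Cartesian
    (x, y) points in the sensor frame. Beam i has angle
    angle_min + i * angle_increment, mirroring ROS LaserScan fields."""
    pts = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

These (x, y) points can then be transformed by the camera-LiDAR extrinsics and matched against detections in the image.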
13

Rodriguez, Julian Severiano. "A comparison of an RGB-D cameras performance and a stereo camera in relation to object recognition and spatial position determination". ELCVIA Electronic Letters on Computer Vision and Image Analysis 20, no. 1 (May 27, 2021): 16–27. http://dx.doi.org/10.5565/rev/elcvia.1238.

Full text
Abstract
Results of using an RGB-D camera (Kinect sensor) and a stereo camera, separately, to determine the real 3D position of characteristic points of a predetermined object in a scene are presented. The KAZE algorithm was used for recognition; it exploits a nonlinear scale space through nonlinear diffusion filtering. The 3D coordinates of the centroid of a predetermined object were calculated using the camera calibration information and the depth provided by the Kinect sensor and the stereo camera. Experimental results show it is possible to obtain the required coordinates with both cameras in order to locate a robot, although the distance at which the sensor is placed must be balanced: no less than 0.8 m from the object to guarantee real depth information, due to the Kinect operating range, and 0.5 m for the stereo camera, which must nevertheless stay within 1 m to keep a suitable object recognition rate. In addition, the Kinect sensor measures distance more precisely than the stereo camera.
14

Bai, Jing, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang, and Chen Zheng. "Hybrid calibration and detection approach for mobile robotic manufacturing systems". Industrial Robot: the international journal of robotics research and application 47, no. 4 (May 11, 2020): 511–19. http://dx.doi.org/10.1108/ir-09-2019-0194.

Full text
Abstract
Purpose The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems. Design/methodology/approach A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. The laser displacement sensor is adopted to achieve normal alignment for an arbitrary plane and obtain depth information. The monocular camera measures the two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper. Findings First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors, and it can achieve normal alignment for an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented and successfully applied to a mobile robotic manufacturing system designed by the authors' team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately. Practical implications This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment accuracy of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy could reach 0.5 mm. Originality/value This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece, and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.
15

Lin, Huei-Yung, Yuan-Chi Chung, and Ming-Liang Wang. "Self-Localization of Mobile Robots Using a Single Catadioptric Camera with Line Feature Extraction". Sensors 21, no. 14 (July 9, 2021): 4719. http://dx.doi.org/10.3390/s21144719.

Full text
Abstract
This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived by the catadioptric camera calibration. The geometric property of the camera projection model is utilized to obtain the intersections of the vertical lines and ground plane in the scene. Different from the conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated using the consecutive image frames. The derivation of motion trajectory is then carried out based on the computation of rotation and translation between the robot positions. We develop an algorithm for feature correspondence matching based on the invariability of the structure in the 3D space. The experimental results obtained using the real scene images have demonstrated the feasibility of the proposed method for mobile robot localization applications.
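Projecting feature points onto a known planar surface replaces stereo depth with a ray-plane intersection. Here is a hedged sketch for a camera at height h above the ground plane z = 0, an illustration of the geometric idea rather than the paper's unified sphere model:

```python
import numpy as np

def ray_ground_point(ray_dir, cam_height):
    """Intersect a viewing ray from a camera at (0, 0, cam_height) with
    the ground plane z = 0. `ray_dir` is the ray direction in the camera-
    centered world frame (any scale). Returns the 3D ground point, or
    None if the ray never reaches the ground."""
    d = np.asarray(ray_dir, float)
    if d[2] >= 0:                  # ray does not point downward
        return None
    t = cam_height / -d[2]         # solve cam_height + t * dz = 0
    return np.array([t * d[0], t * d[1], 0.0])
```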
16

Qu, Yufeng, and Guanghua Zong. "Automated de-stacking using compact and low-cost robotized system". Industrial Robot: An International Journal 41, no. 2 (March 11, 2014): 176–89. http://dx.doi.org/10.1108/ir-06-2013-368.

Full text
Abstract
Purpose – This paper aims to introduce a compact and low-cost robotized system and a corresponding processing method for automatically identifying and de-stacking circulation boxes under natural stacking conditions. Design/methodology/approach – The system is composed of an industrial robot, a laser scanner and a computer. Automated de-stacking requires comprehensive and accurate status information for each box. To achieve this, the robot carries the laser scanner in a linear scan to build a full depth image of the whole working area. A Gaussian filter is applied to the image histogram to suppress undesired noise. A draining-and-flooding process derived from a classic algorithm identifies each box region in an intensity image. After parameter calculation and calibration, the grasping strategy is estimated and transferred to the robot to finish the de-stacking task. Findings – Currently, without a pre-defined stack status, stacking still requires manually operated alignment to enable automatic de-stacking with a robot. Complicated multi-sensor systems such as video cameras can recognize the stack status, but they bring high cost and poor adaptability. It is therefore worthwhile to research efficient and low-cost measurement systems and corresponding general data processing methods. Research limitations/implications – This research presents an efficient solution to the automated de-stacking task and is only tested on three-column stacks, reflecting the actual working conditions. It still needs to be developed and tested for more situations. Originality/value – Using only a single laser scanner to measure box status instead of multiple sensors is novel, and the identification method in this research is suitable for different box types and sizes.
17

Belter, Dominik, Przemysław Łabecki, Péter Fankhauser, and Roland Siegwart. "RGB–D terrain perception and dense mapping for legged robots". International Journal of Applied Mathematics and Computer Science 26, no. 1 (March 1, 2016): 81–97. http://dx.doi.org/10.1515/amcs-2016-0006.

Full text
Abstract
This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism that incorporates a formal treatment of spatial uncertainty. Moreover, this paper presents uncertainty models for a structured-light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The uncertainty model for the stereo vision camera is based on uncertainty propagation from calibration, through the undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used to construct a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
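A formal treatment of spatial uncertainty in an elevation grid typically reduces, per cell, to variance-weighted fusion of height estimates. A minimal one-dimensional sketch of that Kalman-style update (illustrative; the paper's models are richer):

```python
def fuse_elevation(h1, var1, h2, var2):
    """Fuse two height estimates for one elevation-grid cell, weighting
    each by the other's variance (the scalar Kalman update). The fused
    variance is always smaller than either input variance."""
    w = var2 / (var1 + var2)
    h = w * h1 + (1.0 - w) * h2
    var = var1 * var2 / (var1 + var2)
    return h, var
```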
18

Haag, S., D. Zontar, J. Schleupen, T. Müller, and C. Brecher. "Chain of refined perception in self-optimizing assembly of micro-optical systems". Journal of Sensors and Sensor Systems 3, no. 1 (April 14, 2014): 87–95. http://dx.doi.org/10.5194/jsss-3-87-2014.

Full text
Abstract
Today, the assembly of laser systems requires a large share of manual operations due to its complexity regarding the optimal alignment of optics. Although the feasibility of automated alignment of laser optics has been shown in research labs, the development effort for the automation of assembly does not meet economic requirements, especially for low-volume laser production. This paper presents a model-based and sensor-integrated assembly execution approach for flexible assembly cells consisting of a macro-positioner covering a large workspace and a compact micromanipulator with a camera attached to the positioner. In order to make full use of available models from computer-aided design (CAD) and optical simulation, sensor systems at different levels of accuracy are used to match perceived information with model data. This approach is named the "chain of refined perception", and it allows automated planning of complex assembly tasks along all major phases of assembly, such as collision-free path planning, part feeding, and active and passive alignment. The focus of the paper is the in-process image-based metrology and information extraction used for identifying and calibrating local coordinate systems, as well as the exploitation of that information for a part feeding process for micro-optics. Results are presented for the automated calibration of the robot camera as well as of the local coordinate systems of the part feeding area and robot base.
19

Wang, Y., D. Ewert, T. Meisen, D. Schilberg, and S. Jeschke. "Work area monitoring in dynamic environments using multiple auto-aligning 3-D sensors". Journal of Sensors and Sensor Systems 3, no. 1 (June 3, 2014): 113–20. http://dx.doi.org/10.5194/jsss-3-113-2014.

Full text
Abstract
Compared to current industry standards, future production systems will be more flexible and robust and will adapt to unforeseen states and events. Industrial robots will interact with each other as well as with human coworkers. To be able to act in such a dynamic environment, each acting entity ideally needs complete knowledge of its surroundings, concerning working materials as well as other working entities. Therefore, new monitoring methods providing complete coverage of complex and changing working areas are needed. While a single 3-D sensor already provides detailed information within its field of view, complete coverage of a work area can only be achieved by relying on a multitude of these sensors. However, to provide useful information, the data of all sensors must be aligned to each other and fused into an overall world picture. To align the data correctly, the position and orientation of each sensor must be known with sufficient accuracy. In a quickly changing dynamic environment, the positions of sensors are not fixed but must be adjusted to maintain optimal coverage. Therefore, the sensors need to align themselves autonomously in real time. This can be achieved by adding defined markers with given geometrical patterns to the environment, which can be used for calibration and localization of each sensor. As soon as two sensors detect the same markers, their relative position to each other can be calculated. Additional anchor markers at fixed positions serve as global reference points for the base coordinate system. In this paper we present a prototype of a self-aligning monitoring system based on the Robot Operating System (ROS) and Microsoft Kinect. This system is capable of autonomous real-time calibration relative to and with respect to a global coordinate system, as well as detecting and tracking defined objects within the working area.
20

Mohan Rayguru, Madan, Mohan Rajesh Elara, Balakrishnan Ramalingam, M. A. Viraj J. Muthugala, and S. M. Bhagya P. Samarakoon. "A Path Tracking Strategy for Car Like Robots with Sensor Unpredictability and Measurement Errors". Sensors 20, no. 11 (May 29, 2020): 3077. http://dx.doi.org/10.3390/s20113077.

Full text
Abstract
This work is inspired by the motion control of cleaning robots operating in certain endogenous environments and performing various tasks such as door cleaning and wall sanitizing. The base platform's motion for these robots is generally similar to that of four-wheel cars. Most cleaning and maintenance tasks require detection, path planning, and control. The motion controller's job is to ensure that the robot follows the desired path, or a set of points pre-decided by the path planner. This control loop generally requires feedback from the on-board sensors and odometry modules to compute the necessary velocity inputs for the wheels. As the sensors and odometry modules are prone to environmental noise, dead-reckoning errors, and calibration errors, the control input may not provide satisfactory performance in closed loop. This paper develops a robust-observer-based sliding mode controller to fulfill the motion control task in the presence of incomplete state measurements and sensor inaccuracies. A robust intrinsic observer design is proposed to estimate the input matrix, which is used for dynamic feedback linearization. The resulting uncertain dynamics are then stabilized through a sliding mode controller. The proposed robust-observer-based sliding mode technique assures asymptotic trajectory tracking in the presence of measurement uncertainties. Lyapunov-based stability analysis is used to guarantee the convergence of the closed-loop system, and the proposed strategy is successfully validated through numerical simulations.
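The sliding-mode idea the abstract refers to can be illustrated on a much simpler plant than the paper's car-like robot. The sketch below (not the authors' observer-based controller; the plant, gains, and disturbance are invented) tracks a sinusoidal reference with a double integrator under a bounded unknown disturbance, using a switching term with gain larger than the disturbance bound:

```python
import numpy as np

# Minimal sliding-mode tracking sketch: double integrator x'' = u + d with an
# unknown bounded disturbance d, tracking x_ref(t) = sin(t). The sliding
# surface s = e' + lam*e drives the tracking error e to zero as long as the
# switching gain k exceeds the disturbance bound (here |d| <= 0.8 < k = 3).

lam, k, dt = 2.0, 3.0, 1e-3
x, v = 0.5, 0.0                               # initial position and velocity
for i in range(20000):                        # simulate 20 s of closed loop
    t = i * dt
    x_ref, v_ref, a_ref = np.sin(t), np.cos(t), -np.sin(t)
    e, e_dot = x - x_ref, v - v_ref
    s = e_dot + lam * e                       # sliding surface
    u = a_ref - lam * e_dot - k * np.sign(s)  # equivalent + switching control
    d = 0.8 * np.sin(3 * t)                   # unknown matched disturbance
    v += (u + d) * dt                         # forward-Euler integration
    x += v * dt
final_error = abs(x - np.sin(20000 * dt))
```

In practice the discontinuous `sign` term causes chattering; boundary-layer smoothing (e.g. `tanh(s/eps)`) is a common remedy.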
21

Di Biase, Valeria, Ramon F. Hanssen, and Sander E. Vos. "Sensitivity of Near-Infrared Permanent Laser Scanning Intensity for Retrieving Soil Moisture on a Coastal Beach: Calibration Procedure Using In Situ Data". Remote Sensing 13, no. 9 (April 23, 2021): 1645. http://dx.doi.org/10.3390/rs13091645.

Full text
Abstract
Anthropogenic activities and climate change in coastal areas require continuous monitoring for a better understanding of environmental evolution and for the implementation of protection strategies. Surface moisture is one of the important drivers of coastal variability because it strongly affects shoreward sand transport via aeolian processes. Several methods have been explored for measuring surface moisture at different spatiotemporal resolutions, and in recent years, light detection and ranging (LiDAR) technology has been investigated as a remote sensing tool for high-spatiotemporal-resolution moisture detection. The aim of the present study is to assess the performance of a permanent terrestrial laser scanner (TLS), mounted in an original configuration at an elevated position and scanning hourly a wide beach area stretching from the swash zone to the base of a dune, for evaluating soil moisture at a high spatiotemporal resolution. The reflectance of a Riegl VZ-2000 located in Noordwijk on the Dutch coast was used to derive a new calibration curve that allows the estimation of soil moisture. Three days of surveys were conducted to collect ground-truth soil moisture measurements with a time-domain reflectometry (TDR) sensor at 4 cm depth. Each in situ measurement was matched with the closest reflectance measurement provided by the TLS, and the data were interpolated using a non-linear least squares method. A calibration curve that allows the estimation of soil moisture in the range of 0–30% was derived; it presents a root-mean-square error (RMSE) of 4.3% and a coefficient of determination (R-squared) of 0.86. As an innovative aspect, the calibration curve was tested under different circumstances, including weather conditions and tidal levels. Moreover, TDR data collected during an independent survey were used to validate the derived curve. The results show that the permanent TLS is a highly suitable technique for accurately evaluating surface moisture variations over a wide sandy beach area with a high spatiotemporal resolution.
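The calibration step, fitting a non-linear curve from TLS reflectance to TDR moisture by least squares, can be sketched on synthetic data. The exponential model and all coefficients below are invented stand-ins, not the paper's fitted curve; a log transform makes the fit linear so plain NumPy suffices:

```python
import numpy as np

# Invented stand-in model: soil moisture theta (percent) decays exponentially
# with normalized TLS reflectance I,  theta(I) = a * exp(b * I).  Taking logs,
# log(theta) = log(a) + b * I, turns the non-linear fit into ordinary least
# squares over synthetic, noise-free reflectance/moisture pairs.

rng = np.random.default_rng(0)
I = rng.uniform(0.2, 0.9, size=50)        # normalized reflectance samples
a_true, b_true = 35.0, -2.5               # hypothetical calibration parameters
theta = a_true * np.exp(b_true * I)       # "ground-truth" TDR moisture (%)

# Fit log(theta) = log(a) + b * I by linear least squares.
A = np.column_stack([np.ones_like(I), I])
coef, *_ = np.linalg.lstsq(A, np.log(theta), rcond=None)
a_fit, b_fit = np.exp(coef[0]), coef[1]

rmse = np.sqrt(np.mean((a_fit * np.exp(b_fit * I) - theta) ** 2))
```

With real, noisy TDR data a direct non-linear solver (e.g. Gauss-Newton or `scipy.optimize.curve_fit`) is preferable, since the log transform distorts the error weighting.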
22

Li, Ruifeng, Jinlei Zhuang, Yunfeng Gao, Chuqing Cao, and Ke Wang. "Design and calibration of a three-dimensional localization system for automatic measurement of long and thin tube based on camera and laser displacement sensor". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, December 3, 2019, 095440621989230. http://dx.doi.org/10.1177/0954406219892302.

Full text
Abstract
An integrated system consisting of a robot and a laser scanner is considered a promising alternative measurement device for three-dimensional tubes that are long and thin. A key to achieving automatic measurement is locating the tube accurately and robustly, which is a precondition for scanning path planning. Thus, a novel three-dimensional localization system consisting of a camera and a laser displacement sensor is proposed. When fixed on the robot, the camera can search for the tube quickly over a large field of view, and the laser displacement sensor obtains depth information at key points. Measurement principles are presented first, including four main steps: camera shooting, key point extraction, laser displacement sensor shooting, and coordinate calculation. Then, the location error influenced by the parameters of the camera model and the laser displacement sensor is analyzed and illustrated with a specific camera and laser displacement sensor. Furthermore, a scan strategy is proposed for localizing very thin tubes. The location error caused by perspective is also analyzed, and a compensation method is proposed for decreasing this error. Additionally, a sensor transformation calibration method is presented for identifying the relationship between the camera and the laser displacement sensor, which is verified with high accuracy by calibration experiments. A contrast experiment on cylinder bar localization shows that the maximum location error of the camera and laser displacement sensor is no more than 3.5 mm, only one-eighth of that of a Kinect sensor. The mean and maximum location errors are within 2 mm and 3 mm when locating a car brake and fuel tube about 1300 mm long, indicating the high accuracy and good robustness of the designed localization system.
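The coordinate-calculation step amounts to chaining calibrated transforms: a point measured in the laser displacement sensor frame is mapped through the camera and flange frames into the robot base frame. A sketch with invented frame names and extrinsics (the actual calibrated values are of course system-specific):

```python
import numpy as np

# Frame chain (names hypothetical): base <- flange <- camera <- LDS.
#   p_base = T_base_flange @ T_flange_cam @ T_cam_lds @ p_lds

def hom(R, t):
    """4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_base(p_lds, T_base_flange, T_flange_cam, T_cam_lds):
    """Map a point from the laser displacement sensor frame to the robot base."""
    p = np.append(p_lds, 1.0)                      # homogeneous coordinates
    return (T_base_flange @ T_flange_cam @ T_cam_lds @ p)[:3]

# Made-up extrinsics: identity rotations, pure offsets, all in metres.
T_bf = hom(np.eye(3), np.array([0.4, 0.0, 0.6]))   # flange pose in base frame
T_fc = hom(np.eye(3), np.array([0.0, 0.05, 0.1]))  # camera mounted on flange
T_cl = hom(np.eye(3), np.array([0.02, 0.0, 0.0]))  # LDS relative to camera
p_base = to_base(np.array([0.0, 0.0, 0.3]), T_bf, T_fc, T_cl)
```

The paper's calibration experiments identify `T_cam_lds`; `T_base_flange` comes from the robot's forward kinematics at the moment of measurement.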
23

Al Khawli, Toufik, Muddasar Anwar, Dongming Gan, and Shafiqul Islam. "Integrating laser profile sensor to an industrial robotic arm for improving quality inspection in manufacturing processes". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, July 26, 2020, 095440622094255. http://dx.doi.org/10.1177/0954406220942552.

Full text
Abstract
This paper investigates the integration of a laser profile sensor into an industrial robotic arm for automating quality inspection in manufacturing processes that otherwise require labour-intensive manual work. The aim was to register the measurements from a laser profile sensor mounted on a six-degrees-of-freedom robot with respect to the robot base frame. The registration is based on a six-degrees-of-freedom calibration, an essential step for several automated manufacturing processes that require a high level of accuracy in tool positioning and alignment on the one hand, and for quality inspection systems that require flexibility and accurate measurements on the other. The investigation comprises two calibration procedures, namely calibration using a sharp object and calibration using planar constraints. The solutions of the calibration procedures, estimated with both iterative and optimization solvers, are thoroughly discussed. By implementing a simulation platform that generates virtual data for the two procedures with additional levels of noise, the six-dimensional poses are estimated and compared to the ground truth. Finally, an experimental test using a laser profile sensor from Acuity mounted on a Mitsubishi RV-6SDL manipulator is presented to investigate the measurement accuracy with four estimated laser poses. The calibration procedure using a sharp object shows the most accurate simulation and experimental results under the effect of noise.
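The sharp-object procedure is conceptually close to classic pivot calibration: several robot poses keep the sensor's measured point on one fixed tip, which yields a linear least-squares problem for the sensor offset and the tip position. A sketch on synthetic data (this is the generic pivot-calibration formulation, not necessarily the authors' exact parameterization):

```python
import numpy as np

# At each robot pose (R_i, t_i) the sensor's measurement point coincides with
# a fixed tip p, so R_i @ x + t_i = p, where x is the unknown sensor offset in
# the flange frame. Stacking [R_i  -I] [x; p] = -t_i over all poses gives a
# linear least-squares problem in the six unknowns.

def pivot_calibrate(Rs, ts):
    n = len(Rs)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(Rs, ts)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -t
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]               # sensor offset x, tip position p

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Synthetic check with an invented offset and tip position (metres).
x_true = np.array([0.01, 0.02, 0.10])
p_true = np.array([0.50, 0.20, 0.30])
Rs = [rot_z(a) @ rot_x(a / 2) for a in (0.2, 0.7, 1.3, 2.0)]
ts = [p_true - R @ x_true for R in Rs]
x_est, p_est = pivot_calibrate(Rs, ts)
```

The planar-constraint variant replaces the fixed-point equation with the condition that all transformed measurements lie on one plane, which likewise reduces to least squares.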
24

Kang, Mingu, Issam I. A. Qamhia, Erol Tutumluer, Won-Taek Hong, and Jeb S. Tingle. "Bender Element Field Sensor for the Measurement of Pavement Base and Subbase Stiffness Characteristics". Transportation Research Record: Journal of the Transportation Research Board, March 5, 2021, 036119812199835. http://dx.doi.org/10.1177/0361198121998350.

Full text
Abstract
Layer modulus values are important input parameters in mechanistic pavement design and evaluation methods. Direct measurement of the stiffness characteristics of pavement base/subbase has been a challenging task. Commonly used nondestructive testing methods based on surface deflection measurements not only require a backcalculation process but also have limitations in measuring local stiffness within the layer. This paper presents the results of a recent research effort at the University of Illinois aimed at developing a new sensor for the direct measurement of the in-situ moduli of constructed unbound pavement layers. The new sensor employs bender element (BE) shear wave transducers embedded in a granular base/subbase to evaluate the layer modulus from shear wave velocity measured at any depth and any orientation. To provide appropriate protection for the BE sensor and its cable connections, a stainless-steel cable guide, a sensor protection module, and a protection cover for the sensor were designed and optimized. A laboratory calibration box containing sand-sized crushed aggregates was used in the development stage of the BE sensor design. The BE sensor results were also studied for a typical dense-graded base course aggregate commonly used in Illinois. Finally, the BE sensor was installed in a field trial in newly constructed airport pavement test sections, and its layer modulus measurements were compared with results estimated from Dynamic Cone Penetrometer testing. The new BE field sensor has proven to be a viable direct measurement technique in transportation geotechnics applications for monitoring the stiffness characteristics of pavement granular base/subbase layers.
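The modulus computation behind a bender-element measurement is essentially a one-liner: the travel time over the tip-to-tip distance gives the shear wave velocity, and the small-strain shear modulus follows from G = ρ·Vs². A sketch with illustrative numbers (the spacing, travel time, and density below are invented, not from the paper):

```python
# Small-strain shear modulus from a bender-element shear wave measurement.

def shear_modulus(tip_distance_m, travel_time_s, density_kg_m3):
    """G = rho * Vs**2 in Pa, with Vs the tip-to-tip shear wave velocity."""
    vs = tip_distance_m / travel_time_s      # shear wave velocity (m/s)
    return density_kg_m3 * vs ** 2

# E.g. 0.10 m tip spacing, 0.5 ms travel time, 2000 kg/m^3 compacted
# aggregate: Vs = 200 m/s, so G = 2000 * 200**2 = 80 MPa.
G = shear_modulus(0.10, 0.5e-3, 2000.0)
```

Picking the arrival time off the received waveform is the delicate part in practice; the division above is only the final step.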
25

Siva, Sriram, and Hao Zhang. "Robot perceptual adaptation to environment changes for long-term human teammate following". International Journal of Robotics Research, January 2, 2020, 027836491989662. http://dx.doi.org/10.1177/0278364919896625.

Full text
Abstract
Perception is one of several fundamental abilities required by robots, and it also poses significant challenges, especially in real-world field applications. Long-term autonomy introduces additional difficulties for robot perception, including short- and long-term changes of the robot's operating environment (e.g., lighting changes). In this article, we propose an innovative human-inspired approach named robot perceptual adaptation (ROPA) that is able to calibrate perception according to the environment context, which enables perceptual adaptation in response to environmental variations. ROPA jointly performs feature learning, sensor fusion, and perception calibration under a unified regularized optimization framework. We also implement a new algorithm to solve the formulated optimization problem, which has a theoretical guarantee of converging to the optimal solution. In addition, we collect a large-scale dataset from physical robots in the field, called perceptual adaptation to environment changes (PEAC), with the aim of benchmarking methods for robot adaptation to short-term and long-term, and fast and gradual, lighting changes for human detection based upon different feature modalities extracted from color and depth sensors. Utilizing the PEAC dataset, we conduct extensive experiments in the application of human recognition and following in various scenarios to evaluate ROPA. Experimental results have validated that the ROPA approach obtains promising performance in terms of accuracy and efficiency, and effectively adapts robot perception to address short-term and long-term lighting changes in human detection and following applications.
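As a loose illustration of context-dependent sensor fusion (not ROPA's actual regularized framework), per-modality fusion weights can be re-fit by regularized least squares on data from the current lighting condition and refit when the environment changes; the modality scores and weights below are synthetic:

```python
import numpy as np

# Ridge-regression sketch: learn fusion weights w over color and depth
# feature scores F from labeled samples y gathered in the current context,
#   w = (F^T F + lam I)^-1 F^T y.

def fit_fusion_weights(F, y, lam=1e-2):
    """Closed-form ridge solution for per-modality fusion weights."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ y)

rng = np.random.default_rng(1)
# Columns: color-feature score, depth-feature score, for 200 samples.
F = rng.normal(size=(200, 2))
w_day = np.array([0.9, 0.1])       # invented "daylight" ground-truth weights
y = F @ w_day + 0.01 * rng.normal(size=200)
w = fit_fusion_weights(F, y)
```

Under a lighting change (say, depth becoming the informative modality), the same fit on newly gathered samples would shift the weights accordingly.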
