Journal articles on the topic 'Active stereo vision'

Consult the top 50 journal articles for your research on the topic 'Active stereo vision.'

1

Grosso, E., and M. Tistarelli. "Active/dynamic stereo vision." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 9 (1995): 868–79. http://dx.doi.org/10.1109/34.406652.

2

Jang, Mingyu, Hyunse Yoon, Seongmin Lee, Jiwoo Kang, and Sanghoon Lee. "A Comparison and Evaluation of Stereo Matching on Active Stereo Images." Sensors 22, no. 9 (2022): 3332. http://dx.doi.org/10.3390/s22093332.

Abstract:
The relationship between the disparity and depth information of corresponding pixels is inversely proportional. Thus, in order to accurately estimate depth from stereo vision, it is important to obtain accurate disparity maps, which encode the difference between horizontal coordinates of corresponding image points. Stereo vision can be classified as either passive or active. Active stereo vision generates pattern texture, which passive stereo vision does not have, on the image to fill the textureless regions. In passive stereo vision, many surveys have discovered that disparity accuracy is hea
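As a quick aside on the relation stated above (generic notation, not taken from the cited paper): for a rectified stereo pair with focal length f (in pixels) and baseline B, a point matched with disparity d lies at depth

Z = \frac{f \, B}{d}

so depth error grows rapidly as the disparity becomes small or unreliable, which is why filling textureless regions with projected pattern texture, as active stereo does, improves depth accuracy.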
3

Gasteratos, Antonios. "Tele-Autonomous Active Stereo-Vision Head." International Journal of Optomechatronics 2, no. 2 (2008): 144–61. http://dx.doi.org/10.1080/15599610802081753.

4

Wang, Yexin, Fuqiang Zhou, and Yi Cui. "Single-camera active stereo vision system using fiber bundles." Chinese Optics Letters 12, no. 10 (2014): 101301–4. http://dx.doi.org/10.3788/col201412.101301.

5

Samson, Eric, Denis Laurendeau, Marc Parizeau, Sylvain Comtois, Jean-François Allan, and Clément Gosselin. "The Agile Stereo Pair for active vision." Machine Vision and Applications 17, no. 1 (2006): 32–50. http://dx.doi.org/10.1007/s00138-006-0013-7.

6

Feller, Michael, Jae-Sang Hyun, and Song Zhang. "Active Stereo Vision for Precise Autonomous Vehicle Control." Electronic Imaging 2020, no. 16 (2020): 258–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-257.

Abstract:
This paper describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a low-cost, low-power laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. A model test of the hitching problem was developed using an RC car and a target to represent a hitch. A control system is implemented
7

Ko, Jung-Hwan. "Active Object Tracking System based on Stereo Vision." Journal of the Institute of Electronics and Information Engineers 53, no. 4 (2016): 159–66. http://dx.doi.org/10.5573/ieie.2016.53.4.159.

8

Porta, J. M., J. J. Verbeek, and B. J. A. Kröse. "Active Appearance-Based Robot Localization Using Stereo Vision." Autonomous Robots 18, no. 1 (2005): 59–80. http://dx.doi.org/10.1023/b:auro.0000047287.00119.b6.

9

Wang, Yongchang, Kai Liu, Qi Hao, Xianwang Wang, D. L. Lau, and L. G. Hassebrook. "Robust Active Stereo Vision Using Kullback-Leibler Divergence." IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 3 (2012): 548–63. http://dx.doi.org/10.1109/tpami.2011.162.

10

Mohamed, Abdulla, Phil F. Culverhouse, Ricardo De Azambuja, Angelo Cangelosi, and Chenguang Yang. "Automating Active Stereo Vision Calibration Process with Cobots." IFAC-PapersOnLine 50, no. 2 (2017): 163–68. http://dx.doi.org/10.1016/j.ifacol.2017.12.030.

11

Jung, Keonhwa, Seokjung Kim, Sungbin Im, Taehwan Choi, and Minho Chang. "A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System." Applied Sciences 7, no. 10 (2017): 1058. http://dx.doi.org/10.3390/app7101058.

12

Yau, Wei-Yun, and Han Wang. "Active Visual Feedback Control of Robot Manipulator." Journal of Robotics and Mechatronics 9, no. 3 (1997): 231–38. http://dx.doi.org/10.20965/jrm.1997.p0231.

Abstract:
This paper describes an approach to control the robot manipulator using an active stereo camera system as the feedback mechanism. In the conventional system, increasing the precision of the hand-eye system inevitably reduces its operating range. It is also not robust to perturbations of the vision system, which are commonly encountered in real-world applications. The proposed hand-eye system addresses these limitations and shortcomings. In this paper, the concept of a pseudo image space, which has three dimensions, is introduced. A relationship between the pseudo image space and the robot space is…
13

Fan, Di, Yanyang Liu, Xiaopeng Chen, et al. "Eye Gaze Based 3D Triangulation for Robotic Bionic Eyes." Sensors 20, no. 18 (2020): 5271. http://dx.doi.org/10.3390/s20185271.

Abstract:
Three-dimensional (3D) triangulation based on active binocular vision has increasing amounts of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation. However, the accuracy of stereo extrinsic parameters and disparity have a significant impact on 3D triangulation precision. We propose a novel eye gaze based 3D triangulation method that does not use stereo extrinsic parameters directly in order to reduce the impact. Instead, we drive both cameras to gaze at a 3D
14

Hu, Shaopeng, Mingjun Jiang, Takeshi Takaki, and Idaku Ishii. "Real-Time Monocular Three-Dimensional Motion Tracking Using a Multithread Active Vision System." Journal of Robotics and Mechatronics 30, no. 3 (2018): 453–66. http://dx.doi.org/10.20965/jrm.2018.p0453.

Abstract:
In this study, we developed a monocular stereo tracking system to be used as a marker-based, three-dimensional (3-D) motion capture system. This system aims to localize dozens of markers on multiple moving objects in real time by switching five hundred different views in 1 s. The ultrafast mirror-drive active vision used in our catadioptric stereo tracking system can accelerate a series of operations for multithread gaze control with video shooting, computation, and actuation within 2 ms. By switching between five hundred different views in 1 s, with real-time video processing for marker extra
15

Enescu, V., G. De Cubber, K. Cauwerts, et al. "Active stereo vision-based mobile robot navigation for person tracking." Integrated Computer-Aided Engineering 13, no. 3 (2006): 203–22. http://dx.doi.org/10.3233/ica-2006-13302.

16

Xue, Ting, and Bin Wu. "Reparability measurement of vision sensor in active stereo visual system." Measurement 49 (March 2014): 275–82. http://dx.doi.org/10.1016/j.measurement.2013.12.008.

17

Barone, Sandro, Paolo Neri, Alessandro Paoli, and Armando Viviano Razionale. "Flexible calibration of a stereo vision system by active display." Procedia Manufacturing 38 (2019): 564–72. http://dx.doi.org/10.1016/j.promfg.2020.01.071.

18

Krotkov, Eric, and Ruzena Bajcsy. "Active vision for reliable ranging: Cooperating focus, stereo, and vergence." International Journal of Computer Vision 11, no. 2 (1993): 187–203. http://dx.doi.org/10.1007/bf01469228.

19

Yau, Wei Yun, and Han Wang. "Fast Relative Depth Computation for an Active Stereo Vision System." Real-Time Imaging 5, no. 3 (1999): 189–202. http://dx.doi.org/10.1006/rtim.1997.0114.

20

Yamashita, Atsushi, Toru Kaneko, Shinya Matsushita, Kenjiro T. Miura, and Suekichi Isogai. "Camera Calibration and 3-D Measurement with an Active Stereo Vision System for Handling Moving Objects." Journal of Robotics and Mechatronics 15, no. 3 (2003): 304–13. http://dx.doi.org/10.20965/jrm.2003.p0304.

Abstract:
In this paper, we propose a fast, easy camera calibration and 3-D measurement method with an active stereo vision system for handling moving objects whose geometric models are known. We use stereo cameras that change direction independently to follow moving objects. To gain extrinsic camera parameters in real time, a baseline stereo camera (parallel stereo camera) model and projective transformation of stereo images are used by considering epipolar constraints. To make use of 3-D measurement results for a moving object, the manipulator hand approaches the object. When the manipulator hand and
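For background, the epipolar constraint mentioned here takes the standard form (general computer-vision notation, not specific to this paper): corresponding homogeneous image points x and x' in the two views satisfy

x'^{\top} F x = 0

where F is the fundamental matrix; for a rectified, parallel (baseline) stereo configuration this reduces to corresponding points lying on the same image row.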
21

Li, Ze-Nian, and Frank Tong. "Reciprocal-Wedge Transform in Active Stereo." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 1 (1999): 25–48. http://dx.doi.org/10.1142/s0218001499000033.

Abstract:
The Reciprocal-Wedge Transform (RWT) facilitates space-variant image representation. In this paper a V-plane projection method is presented as a model for imaging using the RWT. It is then shown that space-variant sensing with this new RWT imaging model is suitable for fixation control in active stereo that exhibits vergence and versional eye movements and scanpath behaviors. A computational interpretation of stereo fusion in relation to disparity limit in space-variant imagery leads to the development of a computational model for binocular fixation. The vergence-version movement sequence is i
22

Du, Fenglei, and Michael Brady. "A Four Degree-of-Freedom Robot Head for Active Vision." International Journal of Pattern Recognition and Artificial Intelligence 8, no. 6 (1994): 1439–69. http://dx.doi.org/10.1142/s021800149400070x.

Abstract:
The design of a robot head for active computer vision tasks is described. The stereo head/eye platform uses a common elevation configuration and has four degrees of freedom. The joints are driven by DC servo motors coupled with incremental optical encoders and backlash-minimizing gearboxes. The details of the mechanical design, the head controller design, the architecture of the system, and the design criteria for various specifications are presented.
23

Du, Qin Jun, Xue Yi Zhang, and Xing Guo Huang. "Modeling and Analysis of a Humanoid Robot Active Stereo Vision Platform." Applied Mechanics and Materials 55-57 (May 2011): 868–71. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.868.

Abstract:
A humanoid robot is not only expected to walk stably, but is also required to perform manipulation tasks autonomously in our work and living environment. This paper discusses the visual perception and object manipulation of a humanoid robot based on visual servoing; an active robot vision model is built, and then the 3D location principle, the calibration method, and the precision of this model are analyzed. This active robot vision system with two DOF enlarges the visual field, and stereo is the simplest camera configuration for obtaining 3D position information.
24

Charles, Priya, and A. V. Patil. "Non parametric methods of disparity computation." International Journal of Engineering & Technology 7, no. 2.6 (2018): 28. http://dx.doi.org/10.14419/ijet.v7i2.6.10062.

Abstract:
Disparity is inversely proportional to depth. Information about depth is a key factor in many real-time applications such as computer vision, medical diagnosis, and model precision. Disparity is measured first in order to calculate the depth that suits real-world applications. There are two approaches, viz. active and passive methods. Due to its cost effectiveness, the passive approach is the most popular. In spite of this, the measures are limited by occlusion, the number of objects, and texture areas. So, effective and efficient stereo depth estimation algorithms have taken
25

Zoppi, Matteo, and Rezia Molfino. "ArmillEye: Flexible Platform for Underwater Stereo Vision." Journal of Mechanical Design 129, no. 8 (2006): 808–15. http://dx.doi.org/10.1115/1.2735338.

Abstract:
The paper describes ArmillEye, a 3-degree of freedom (DOF) flexible hybrid platform designed for agile underwater stereoptic vision. Effective telecontrol systems of remote operated vehicles require active and dexterous camera support in order to allow the operator to easily and promptly change the point of view, also improving the virtual reconstruction of the environment in difficult operative conditions (dirtiness, turbulence, and partial occlusion). The same concepts hold for visual servoing of autonomous underwater vehicles. ArmillEye was designed for this specific application; it is base
26

Tang, Cheng-Yuan, Zen Chen, and Yi-Ping Hung. "Automatic Detection and Tracking of Human Heads Using an Active Stereo Vision System." International Journal of Pattern Recognition and Artificial Intelligence 14, no. 2 (2000): 137–66. http://dx.doi.org/10.1142/s0218001400000118.

Abstract:
A new head tracking algorithm for automatically detecting and tracking human heads in complex backgrounds is proposed. By using an elliptical model for the human head, our Maximum Likelihood (ML) head detector can reliably locate human heads in images having complex backgrounds and is relatively insensitive to illumination and rotation of the human heads. Our head detector consists of two channels: the horizontal and the vertical channels. Each channel is implemented by multiscale template matching. Using a hierarchical structure in implementing our head detector, the execution time for detect
27

Tang, Yiping, Shaohui Lu, Ting Wu, and Guodong Han. "Pipe morphology defects inspection system with active stereo omnidirectional vision sensor." Infrared and Laser Engineering 45, no. 11 (2016): 1117005. http://dx.doi.org/10.3788/irla201645.1117005.

29

Huber, Eric, and David Kortenkamp. "A behavior-based approach to active stereo vision for mobile robots." Engineering Applications of Artificial Intelligence 11, no. 2 (1998): 229–43. http://dx.doi.org/10.1016/s0952-1976(97)00078-x.

30

Okubo, Atsushi, Atsushi Nishikawa, and Fumio Miyazaki. "Selective acquisition of 3D structure with an active stereo vision system." Systems and Computers in Japan 30, no. 12 (1999): 1–15. http://dx.doi.org/10.1002/(sici)1520-684x(19991115)30:12<1::aid-scj1>3.0.co;2-6.

31

Xu, Tingting, Tianguang Zhang, Kolja Kühnlenz, and Martin Buss. "Attentional Object Detection with an Active Multi-Focal Vision System." International Journal of Humanoid Robotics 7, no. 2 (2010): 223–43. http://dx.doi.org/10.1142/s0219843610002076.

Abstract:
A biologically inspired foveated attention system in an object detection scenario is proposed. Bottom-up attention uses wide-angle stereo camera data to select a sequence of fixation points. Successive snapshots of high foveal resolution using a telephoto camera enable highly accurate object recognition based on SIFT algorithm. Top-down information is incrementally estimated and integrated using a Kalman-filter, enabling parameter adaptation to changing environments due to robot locomotion. In the experimental evaluation, all the target objects were detected in different backgrounds. Significa
32

Bi, Songlin, Menghao Wang, Jiaqi Zou, Yonggang Gu, Chao Zhai, and Ming Gong. "Dental Implant Navigation System Based on Trinocular Stereo Vision." Sensors 22, no. 7 (2022): 2571. http://dx.doi.org/10.3390/s22072571.

Abstract:
Traditional dental implant navigation systems (DINS) based on binocular stereo vision (BSV) have limitations, for example, weak anti-occlusion abilities, as well as problems with feature point mismatching. These shortcomings limit the operators’ operation scope, and the instruments may even cause damage to the adjacent important blood vessels, nerves, and other anatomical structures. Trinocular stereo vision (TSV) is introduced to DINS to improve the accuracy and safety of dental implants in this study. High positioning accuracy is provided by adding cameras. When one of the cameras is blocked
33

Chung, Jae-Moon, and Tadashi Nagata. "Binocular vision planning with anthropomorphic features for grasping parts by robots." Robotica 14, no. 3 (1996): 269–79. http://dx.doi.org/10.1017/s0263574700019585.

Abstract:
Planning of an active vision having anthropomorphic features, such as binocularity, foveas and gaze control, is proposed. The aim of the vision is to provide robots with the pose information of an adequate object to be grasped by the robots. For this, the paper describes a viewer-oriented fixation point frame and its calibration, active motion and gaze control of the vision, disparity filtering, zoom control, and estimation of the pose of a specific portion of a selected object. On the basis of the importance of the contour information and the scheme of stereo vision in recognizing…
34

Shibata, Masaaki, and Taiga Honma. "A Control Technique for 3D Object Tracking on Active Stereo Vision Robot." IEEJ Transactions on Electronics, Information and Systems 125, no. 3 (2005): 536–37. http://dx.doi.org/10.1541/ieejeiss.125.536.

35

Chen, Chichyang, and Y. F. Zheng. "Passive and active stereo vision for smooth surface detection of deformed plates." IEEE Transactions on Industrial Electronics 42, no. 3 (1995): 300–306. http://dx.doi.org/10.1109/41.382141.

36

Wallner, F., and R. Dillman. "Real-time map refinement by use of sonar and active stereo-vision." Robotics and Autonomous Systems 16, no. 1 (1995): 47–56. http://dx.doi.org/10.1016/0921-8890(95)00147-8.

37

Nishikawa, Atsushi, Shinpei Ogawa, Noriaki Maru, and Fumio Miyazaki. "Reconstruction of object surfaces by using occlusion information from active stereo vision." Systems and Computers in Japan 28, no. 9 (1997): 86–97. http://dx.doi.org/10.1002/(sici)1520-684x(199708)28:9<86::aid-scj10>3.0.co;2-f.

38

Deris, A., I. Trigonis, A. Aravanis, and E. K. Stathopoulou. "Depth Cameras on UAVs: A First Approach." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 231–36. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-231-2017.

Abstract:
Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active, as well as passive, are used to serve this purpose such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolab's ZED depth camera based on passive ste
39

Grace, A. E., D. Pycock, H. T. Tillotson, and M. S. Snaith. "Active shape from stereo for highway inspection." Machine Vision and Applications 12, no. 1 (2000): 7–15. http://dx.doi.org/10.1007/s001380050119.

40

Wu, Tao T., and Jianan Y. Qu. "Optical imaging for medical diagnosis based on active stereo vision and motion tracking." Optics Express 15, no. 16 (2007): 10421. http://dx.doi.org/10.1364/oe.15.010421.

41

Das, S., and N. Ahuja. "Performance analysis of stereo, vergence, and focus as depth cues for active vision." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 12 (1995): 1213–19. http://dx.doi.org/10.1109/34.476513.

42

Dipanda, A., S. Woo, F. Marzani, and J. M. Bilbault. "3-D shape reconstruction in an active stereo vision system using genetic algorithms." Pattern Recognition 36, no. 9 (2003): 2143–59. http://dx.doi.org/10.1016/s0031-3203(03)00049-9.

43

Dankers, Andrew, Nick Barnes, and Alex Zelinsky. "MAP ZDF segmentation and tracking using active stereo vision: Hand tracking case study." Computer Vision and Image Understanding 108, no. 1-2 (2007): 74–86. http://dx.doi.org/10.1016/j.cviu.2006.10.013.

44

Pau, L. F. "An Intelligent Camera for Active Vision." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 1 (1996): 33–42. http://dx.doi.org/10.1142/s0218001496000049.

Abstract:
Much research is currently going on about the processing of one or two-camera imagery, possibly combined with other sensors and actuators, in view of achieving attentive vision, i.e. processing selectively some parts of a scene possibly with another resolution. Attentive vision in turn is an element of active vision where the outcome of the image processing triggers changes in the image acquisition geometry and/or of the environment. Almost all this research is assuming classical imaging, scanning and conversion geometries, such as raster based scanning and processing of several digitized outp
45

Yi, Ying Min, and Yu Hui. "Simultaneous Localization and Mapping with Identification of Landmarks Based on Monocular Vision." Advanced Materials Research 366 (October 2011): 90–94. http://dx.doi.org/10.4028/www.scientific.net/amr.366.90.

Abstract:
How to identify objects is a hot issue of robot simultaneous localization and mapping (SLAM) with monocular vision. In this paper, an algorithm of wheeled robot’s simultaneous localization and mapping with identification of landmarks based on monocular vision is proposed. In observation steps, identifying landmarks and locating position are performed by image processing and analyzing, which converts vision image projection of wheeled robots and geometrical relations of spatial objects into calculating robots’ relative landmarks distance and angle. The integral algorithm procedure follows the r
46

Sumetheeprasit, Borwonpob, Ricardo Rosales Martinez, Hannibal Paul, and Kazuhiro Shimonomura. "Long-Range 3D Reconstruction Based on Flexible Configuration Stereo Vision Using Multiple Aerial Robots." Remote Sensing 16, no. 2 (2024): 234. http://dx.doi.org/10.3390/rs16020234.

Abstract:
Aerial robots, or unmanned aerial vehicles (UAVs), are widely used in 3D reconstruction tasks employing a wide range of sensors. In this work, we explore the use of wide baseline and non-parallel stereo vision for fast and movement-efficient long-range 3D reconstruction with multiple aerial robots. Each viewpoint of the stereo vision system is distributed on separate aerial robots, facilitating the adjustment of various parameters, including baseline length, configuration axis, and inward yaw tilt angle. Additionally, multiple aerial robots with different sets of parameters can be used simulta
47

Wang, Xin, and Pieter Jonker. "An Advanced Active Vision System with Multimodal Visual Odometry Perception for Humanoid Robots." International Journal of Humanoid Robotics 14, no. 03 (2017): 1750006. http://dx.doi.org/10.1142/s0219843617500062.

Abstract:
Using active vision to perceive surroundings instead of just passively receiving information, humans develop the ability to explore unknown environments. Humanoid robot active vision research has already half a century history. It covers comprehensive research areas and plenty of studies have been done. Nowadays, the new trend is to use a stereo setup or a Kinect with neck movements to realize active vision. However, human perception is a combination of eye and neck movements. This paper presents an advanced active vision system that works in a similar way as human vision. The main contributio
48

Shibata, Masaaki, and Taiga Honma. "Visual Tracking Control for Static Pose and Dynamic Response on Active Stereo Vision Robot." Journal of the Japan Society for Precision Engineering, Contributed Papers 71, no. 8 (2005): 1036–40. http://dx.doi.org/10.2493/jspe.71.1036.

49

Busboom, A., and R. J. Schalkoff. "Active stereo vision and direct surface parameter estimation: curve-to-curve image plane mappings." IEE Proceedings - Vision, Image, and Signal Processing 143, no. 2 (1996): 109. http://dx.doi.org/10.1049/ip-vis:19960162.

50

Nagahama, Kotaro, Shota Shirayama, Ryusuke Ueki, Mitsuharu Kojima, Kei Okada, and Masayuki Inaba. "2P1-D19 Gaze Control to Human and Handling Objects for Humanoid's Stereo Active Vision." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2010 (2010): _2P1-D19_1–_2P1-D19_4. http://dx.doi.org/10.1299/jsmermd.2010._2p1-d19_1.
