Journal articles on the topic 'Camera motion estimation'

Consult the top 50 journal articles for your research on the topic 'Camera motion estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Guan, Banglei, Xiangyi Sun, Yang Shang, Xiaohu Zhang, and Manuel Hofer. "Multi-camera networks for motion parameter estimation of an aircraft." International Journal of Advanced Robotic Systems 14, no. 1 (January 1, 2017): 172988141769231. http://dx.doi.org/10.1177/1729881417692312.
Abstract: A multi-camera network is proposed to estimate an aircraft’s motion parameters relative to the reference platform in large outdoor fields. Multiple cameras are arranged to cover the aircraft’s large-scale motion space by field stitching. A camera calibration method using dynamic control points created by a multirotor unmanned aerial vehicle is presented for conditions in which the cameras’ field of view contains no fixed calibration targets. The relative deformation of the camera network caused by external environmental factors is measured and compensated for using a combination of cameras and laser rangefinders. A series of field experiments was carried out using a fixed-wing aircraft without artificial markers, and the accuracy was evaluated against an onboard Differential Global Positioning System. The experimental results show that the multi-camera network is precise, robust, and highly dynamic, and can improve the aircraft’s landing accuracy.

2. Mansour, Mostafa, Pavel Davidson, Oleg Stepanov, and Robert Piché. "Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: A Computer Vision Approach." Remote Sensing 11, no. 17 (August 23, 2019): 1990. http://dx.doi.org/10.3390/rs11171990.
Abstract: Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in depth estimation for stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras, together with the camera motion parameters, serves as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provides better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.

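Both cues in this study reduce to the same triangulation geometry: stereo depth follows Z = f·B/d for focal length f, baseline B, and disparity d, while motion parallax replaces the fixed baseline with the camera translation between frames. A minimal sketch of that relation (all numbers are illustrative, not the paper's data):

```python
import numpy as np

# Depth from binocular disparity: Z = f * B / d, with focal length f (px),
# stereo baseline B (m), and disparity d (px). Values are made up.
f_px = 700.0
baseline_m = 0.12
disparity_px = np.array([35.0, 14.0, 7.0])
z_stereo = f_px * baseline_m / disparity_px           # [2.4, 6.0, 12.0] m

# Depth from motion parallax: the measured camera translation between two
# frames plays the role of the baseline, so the same triangulation applies.
translation_m = 0.5                                   # from odometry/IMU
shift_px = np.array([150.0, 60.0, 30.0])              # image shift of features
z_parallax = f_px * translation_m / shift_px          # [2.33, 5.83, 11.67] m
```

Because the translation between frames can grow far beyond any fixed stereo baseline, parallax keeps its accuracy at long range, which is consistent with the paper's conclusion.
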
3. Holešovský, Ondřej, Radoslav Škoviera, Václav Hlaváč, and Roman Vítek. "Experimental Comparison between Event and Global Shutter Cameras." Sensors 21, no. 4 (February 6, 2021): 1137. http://dx.doi.org/10.3390/s21041137.
Abstract: We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) a challenging practical ballistic experiment (observing a flying bullet, with ground truth provided by an expensive ultra-high-speed frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed, and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to positive than to negative large and sudden contrast changes. They outperformed the frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate bandwidth limitations.

4. Ogawa, Shota, Kenichi Asami, and Mochimitsu Komori. "Design and Evaluation of Compact Real-time Descriptor for Camera Motion Estimation." Journal of the Institute of Industrial Applications Engineers 5, no. 2 (April 25, 2017): 90–99. http://dx.doi.org/10.12792/jiiae.5.90.

5. Kwon, Soon-Kak, and Seong-Woo Kim. "Motion Estimation Method by Using Depth Camera." Journal of Broadcast Engineering 17, no. 4 (July 30, 2012): 676–83. http://dx.doi.org/10.5909/jbe.2012.17.4.676.

6. Lv, Yao-wen, Jian-li Wang, Hao-jing Wang, Wei Liu, Liang Wu, and Jing-tai Cao. "Estimation of camera poses by parabolic motion." Optics and Precision Engineering 22, no. 4 (2014): 1078–85. http://dx.doi.org/10.3788/ope.20142204.1078.

7. Nilsson, Emil, Christian Lundquist, Thomas B. Schön, David Forslund, and Jacob Roll. "Vehicle Motion Estimation Using an Infrared Camera." IFAC Proceedings Volumes 44, no. 1 (January 2011): 12952–57. http://dx.doi.org/10.3182/20110828-6-it-1002.03037.

8. Tang, Jiexiong, John Folkesson, and Patric Jensfelt. "Geometric Correspondence Network for Camera Motion Estimation." IEEE Robotics and Automation Letters 3, no. 2 (April 2018): 1010–17. http://dx.doi.org/10.1109/lra.2018.2794624.

9. Özyeşil, Onur, Amit Singer, and Ronen Basri. "Stable Camera Motion Estimation Using Convex Programming." SIAM Journal on Imaging Sciences 8, no. 2 (January 2015): 1220–62. http://dx.doi.org/10.1137/140977576.

10. Jonchery, C., F. Dibos, and G. Koepfler. "Camera Motion Estimation Through Planar Deformation Determination." Journal of Mathematical Imaging and Vision 32, no. 1 (April 12, 2008): 73–87. http://dx.doi.org/10.1007/s10851-008-0086-1.

11. Ioannidis, Antonis, Vasileios Chasanis, and Aristidis Likas. "Camera Motion Detection Through Frame Splitting and Combination of Region-Based Motion Signals." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 09 (May 27, 2018): 1855015. http://dx.doi.org/10.1142/s0218001418550157.
Abstract: Most of the existing approaches for camera motion detection are based on optical flow analysis and the use of the affine motion model. However, these methods are computationally expensive due to the cost of optical flow estimation and may be inefficient in the presence of moving objects whose motion is independent of the camera motion. We present an effective approach to detect camera motions by considering four trapezoidal regions in each frame and computing the horizontal and vertical translations of those regions. Then, simple decision rules based on the translations of the regions are employed in order to decide on the existence and type of camera motion in each frame. In this way, three signals are constructed (pan, tilt, zoom), which are subsequently filtered to improve the robustness of the method. Comparative experiments on a variety of videos indicate that our method efficiently detects any type of camera motion (pan, tilt, zoom), even when moving objects exist in the video sequence.

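A rough sketch of the kind of region-based decision rule the abstract describes, using phase correlation to measure per-region translations; the rectangular regions and thresholds below are illustrative stand-ins for the paper's trapezoidal regions and tuned rules:

```python
import cv2
import numpy as np

def region_shift(prev, curr):
    """Translation (dx, dy) of one region, via phase correlation."""
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev), np.float32(curr))
    return dx, dy

def classify_camera_motion(prev_gray, curr_gray, thresh=1.0):
    h, w = prev_gray.shape
    # Four border regions; the paper uses trapezoids, plain rectangles here.
    regions = {
        "left":   (slice(0, h), slice(0, w // 4)),
        "right":  (slice(0, h), slice(3 * w // 4, w)),
        "top":    (slice(0, h // 4), slice(0, w)),
        "bottom": (slice(3 * h // 4, h), slice(0, w)),
    }
    s = {k: region_shift(prev_gray[r], curr_gray[r]) for k, r in regions.items()}
    # Zoom: the left and right regions move apart or together horizontally.
    if s["right"][0] - s["left"][0] > thresh:
        return "zoom in"
    if s["left"][0] - s["right"][0] > thresh:
        return "zoom out"
    # Pan/tilt: all regions translate coherently in x or y.
    mean_dx = np.mean([dx for dx, _ in s.values()])
    mean_dy = np.mean([dy for _, dy in s.values()])
    if abs(mean_dx) > thresh:
        return "pan"
    if abs(mean_dy) > thresh:
        return "tilt"
    return "static"
```
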
12. Faugeras, O. D., and F. Lustman. "Motion and Structure from Motion in a Piecewise Planar Environment." International Journal of Pattern Recognition and Artificial Intelligence 02, no. 03 (September 1988): 485–508. http://dx.doi.org/10.1142/s0218001488000285.
Abstract: We show in this article that when the environment is piecewise planar, it provides a powerful constraint on the kind of matches that exist between two images of the scene when the camera motion is unknown. For points and lines located in the same plane, the correspondence between the two cameras is a collineation. We show that the unknowns (the camera motion and the plane equation) can be recovered, in general, from an estimate of the matrix of this collineation. The two-fold ambiguity that remains can be removed by looking at a second plane, by taking a third view of the same plane, or by using a priori knowledge about the geometry of the plane being looked at. We then show how to combine the estimation of the collineation matrix with the extraction of point and line matches between the two images, using a strategy of hypothesis prediction and testing guided by a Kalman filter. We finally show how our approach can be used to calibrate a system of cameras.

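The collineation (plane-induced homography) decomposition analyzed here is available in modern libraries; a sketch using OpenCV on a synthetic homography (intrinsics and motion invented for the example), where the multiple returned candidates exhibit exactly the ambiguity the abstract discusses:

```python
import cv2
import numpy as np

# For points on a plane, x2 ~ H x1 with H = K (R + t n^T / d) K^{-1}; given
# H and K, the motion (R, t/d) and the plane normal n can be recovered.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed intrinsics

# Synthetic ground truth: a small rotation about y, translation scaled by
# the inverse plane distance, and a plane facing the camera.
R, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))
t_over_d = np.array([[0.2], [0.0], [0.05]])
n = np.array([[0.0], [0.0], [1.0]])
H = K @ (R + t_over_d @ n.T) @ np.linalg.inv(K)

# Up to four (R, t, n) candidates come back; after the positive-depth test,
# the two-fold ambiguity the abstract describes remains and is resolved,
# e.g., with a second plane or a third view of the same plane.
num, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
for i in range(num):
    print(i, cv2.Rodrigues(Rs[i])[0].ravel(), ts[i].ravel(), ns[i].ravel())
```
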
13. Nuger, Evgeny, and Beno Benhabib. "A Methodology for Multi-Camera Surface-Shape Estimation of Deformable Unknown Objects." Robotics 7, no. 4 (November 11, 2018): 69. http://dx.doi.org/10.3390/robotics7040069.
Abstract: A novel methodology is proposed herein to estimate the three-dimensional (3D) surface shape of unknown, markerless deforming objects through a modular multi-camera vision system. The methodology is a generalized formal approach to shape estimation for a priori unknown objects. Accurate shape estimation is accomplished through a robust, adaptive particle filtering process. The estimation process yields a set of surface meshes representing the expected deformation of the target object. The methodology is based on the use of a multi-camera system with a variable number of cameras and a range of object motions. The numerous simulations and experiments presented herein demonstrate the proposed methodology’s ability to accurately estimate the surface deformation of unknown objects, as well as its robustness to object loss under self-occlusion and varying motion dynamics.

14. Kaichi, Tomoya, Tsubasa Maruyama, Mitsunori Tada, and Hideo Saito. "Resolving Position Ambiguity of IMU-Based Human Pose with a Single RGB Camera." Sensors 20, no. 19 (September 23, 2020): 5453. http://dx.doi.org/10.3390/s20195453.
Abstract: Human motion capture (MoCap) plays a key role in healthcare and human–robot collaboration. Some researchers have combined orientation measurements from inertial measurement units (IMUs) and positional inference from cameras to reconstruct 3D human motion. These works utilize multiple cameras or depth sensors to localize the human in three dimensions. Multiple cameras are not always available in daily life, whereas a single camera embedded in a smart IP device has recently become common. Therefore, we present a 3D pose estimation approach from IMUs and a single camera. In order to resolve the depth ambiguity of the single-camera configuration and localize the global position of the subject, we present a constraint that optimizes the foot-ground contact points. The timing and 3D positions of ground contact are calculated from the acceleration of the foot-mounted IMUs and from a geometric transformation of the foot position detected in the image, respectively. Since the pose estimation results are greatly affected by detection failures, we design image-based constraints to handle outliers in the positional estimates. We evaluated the performance of our approach on a public 3D human pose dataset. The experiments demonstrated that the proposed constraints improved the accuracy of pose estimation in both single- and multiple-camera settings.

15. Özyeşil, Onur, Vladislav Voroninski, Ronen Basri, and Amit Singer. "A survey of structure from motion." Acta Numerica 26 (May 1, 2017): 305–64. http://dx.doi.org/10.1017/s096249291700006x.
Abstract: The structure from motion (SfM) problem in computer vision is to recover the three-dimensional (3D) structure of a stationary scene from a set of projective measurements, represented as a collection of two-dimensional (2D) images, via estimation of motion of the cameras corresponding to these images. In essence, SfM involves the three main stages of (i) extracting features in images (e.g. points of interest, lines, etc.) and matching these features between images, (ii) camera motion estimation (e.g. using relative pairwise camera positions estimated from the extracted features), and (iii) recovery of the 3D structure using the estimated motion and features (e.g. by minimizing the so-called reprojection error). This survey mainly focuses on relatively recent developments in the literature pertaining to stages (ii) and (iii). More specifically, after touching upon the early factorization-based techniques for motion and structure estimation, we provide a detailed account of some of the recent camera location estimation methods in the literature, followed by discussion of notable techniques for 3D structure recovery. We also cover the basics of the simultaneous localization and mapping (SLAM) problem, which can be viewed as a specific case of the SfM problem. Further, our survey includes a review of the fundamentals of feature extraction and matching (i.e. stage (i) above), various recent methods for handling ambiguities in 3D scenes, SfM techniques involving relatively uncommon camera models and image features, and popular sources of data and SfM software.

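The three SfM stages the survey enumerates map directly onto a standard two-view pipeline; a minimal OpenCV sketch (image paths and intrinsics are placeholders):

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)        # placeholder paths
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Stage (i): feature extraction and matching.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
p2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Stage (ii): camera motion estimation via the essential matrix.
E, inl = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, inl = cv2.recoverPose(E, p1, p2, K, mask=inl)

# Stage (iii): 3D structure recovery by triangulation (minimizing the
# reprojection error would refine this via bundle adjustment).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
X = (X[:3] / X[3]).T   # homogeneous -> Euclidean 3D points
```
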
16. Minami, Mamoru, and Wei Song. "Hand-Eye Motion-Invariant Pose Estimation with Online 1-Step GA -3D Pose Tracking Accuracy Evaluation in Dynamic Hand-Eye Oscillation-." Journal of Robotics and Mechatronics 21, no. 6 (December 20, 2009): 709–19. http://dx.doi.org/10.20965/jrm.2009.p0709.
Abstract: This paper presents online pose measurement for a 3-dimensional (3-D) object detected by stereo hand-eye cameras. Our proposal improves 3-D pose tracking accuracy by compensating for the fictional motion of the target in camera images stemming from the ego motion of the hand-eye camera caused by dynamic manipulator oscillation. This motion feed-forward (MFF) is incorporated into the evolutionary search of a genetic algorithm (GA) and a fitness evaluation based on stereo model matching whose pose is expressed using a unit quaternion. The proposal’s effectiveness was confirmed in simulation, tracking an object’s 3-D pose adversely affected by hand-eye camera oscillations induced by dynamic effects of robot motion.

17. Jhan, Jyun-Ping, Jiann-Yeou Rau, and Chih-Ming Chou. "Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation." Remote Sensing 12, no. 16 (August 12, 2020): 2600. http://dx.doi.org/10.3390/rs12162600.
Abstract: The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir that is designed to connect the new bypass tunnel and reach downward to the sediment surface. Since ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion changes during installation. To assure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras were used to form a multicamera system, and several auxiliary devices—such as waterproof housing, tripods, and a waterproof LED—were adopted to protect the cameras and to obtain clear images in the underwater environment. However, since it is difficult for the divers to position the cameras and ensure overlapping fields of view, each camera can only observe the head, middle, or tail part of ETSP, leading to a small overlap area among the images. Therefore, it is not possible to apply traditional multi-image forward intersection, in which the cameras’ positions and orientations have to be calibrated and fixed in advance. Instead, by tracking the 3D coordinates of ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopt a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure first acquires the 3D coordinates of ETSP by taking multiposition images with a precalibrated camera in the air, and then uses the 3D coordinates as control points to perform space resection of the calibrated underwater cameras. Finally, we calculate the 6-DOF of ETSP by using the camera orientation information through both multi- and single-camera approaches. In this study, we show the results of camera calibration in the air and underwater, present the 6-DOF motion parameters of the ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches.

18. Mansour, M., P. Davidson, O. A. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth estimation with ego-motion assisted monocular camera." Giroskopiya i Navigatsiya 27, no. 2 (2019): 28–51. http://dx.doi.org/10.17285/0869-7035.2019.27.2.028-051.

19. Yuan, Ding, Miao Liu, Jihao Yin, and Jiankun Hu. "Camera motion estimation through monocular normal flow vectors." Pattern Recognition Letters 52 (January 2015): 59–64. http://dx.doi.org/10.1016/j.patrec.2014.09.015.

20. Mansour, M., P. Davidson, O. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth Estimation with Ego-Motion Assisted Monocular Camera." Gyroscopy and Navigation 10, no. 3 (July 2019): 111–23. http://dx.doi.org/10.1134/s2075108719030064.

21. Yu, Y. K., K. H. Wong, M. M. Y. Chang, and S. H. Or. "Recursive Camera-Motion Estimation With the Trifocal Tensor." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 5 (October 2006): 1081–90. http://dx.doi.org/10.1109/tsmcb.2006.874133.

22. Weng, Ying, and Jianmin Jiang. "Fast camera motion estimation in MPEG compressed domain." IEEE Transactions on Consumer Electronics 57, no. 3 (August 2011): 1329–35. http://dx.doi.org/10.1109/tce.2011.6018891.

23. Alkhatib, M. N., A. V. Bobkov, and N. M. Zadoroznaya. "Camera pose estimation based on structure from motion." Procedia Computer Science 186 (2021): 146–53. http://dx.doi.org/10.1016/j.procs.2021.04.205.

24. Qiao, Xiaorui, Atsushi Yamashita, and Hajime Asama. "Underwater Structure from Motion for Cameras Under Refractive Surfaces." Journal of Robotics and Mechatronics 31, no. 4 (August 20, 2019): 603–11. http://dx.doi.org/10.20965/jrm.2019.p0603.
Abstract: Structure from Motion (SfM), as a three-dimensional (3D) reconstruction technique, can estimate the structure of an object by using a single moving camera. Cameras deployed in underwater environments are generally confined to waterproof housings. Thus, the light rays entering the camera are refracted twice: once at the interface between the water and the camera housing, and again at the interface between the camera housing and the air. Images captured in underwater environments are therefore prone to, and deteriorate from, distortion caused by this refraction. If the refractive distortion is not properly addressed, it causes severe errors in geometric reconstruction. Here, we propose an SfM approach that deals with the refraction in a camera system that includes a refractive surface. The impact of light refraction is precisely modeled in the refractive model. Based on the model, a new calibration and camera pose estimation method is proposed. This method enables accurate 3D reconstruction with the refractive camera system. Experiments on both simulations and real images show that the proposed method achieves accurate reconstruction and effectively reduces the refractive distortion compared to conventional SfM.

25. Schmitt, Robert, and Yu Cai. "Single camera-based synchronisation within a concept of robotic assembly in motion." Assembly Automation 34, no. 2 (April 1, 2014): 160–68. http://dx.doi.org/10.1108/aa-04-2013-040.
Abstract:
Purpose – Automated robotic assembly on a moving workpiece, referred to as assembly in motion, demands that an assembly robot be synchronised in all degrees of freedom with the moving workpiece on which assembly parts are installed. Currently, this requirement cannot be met due to the lack of robust estimation of the 3D position and trajectory of the moving workpiece. The purpose of this paper is to develop a camera system that measures the 3D trajectory of the moving workpiece for robotic assembly in motion.
Design/methodology/approach – For the trajectory estimation, an assembly-robot-guided monocular camera system is developed. The motion trajectory of a workpiece is estimated by modelling the trajectory as a linear combination of trajectory bases, such as discrete cosine transform bases.
Findings – The developed camera system for trajectory estimation is tested within the robotic assembly of a cylinder block in motion. The experimental results show that the proposed method is able to reconstruct arbitrary trajectories of an assembly point on a workpiece moving in 3D space.
Research limitations/implications – With the developed technology, a point trajectory can be recovered only offline, after all measurement images have been acquired. For practical assembly tasks in real production, this method should be extended to determine the trajectory online, during the motion of a workpiece.
Practical implications – For practical robotic assembly in motion, such as assembling tires, wheels and windscreens on conveyed vehicle bodies, the developed technology can be used for positioning a moving workpiece that is in the distant field of an assembly robot.
Originality/value – Besides laser trackers, indoor global positioning systems and stereo cameras, this paper provides a solution for trajectory estimation using a monocular camera system.

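The core of the design section, representing a smooth trajectory as a linear combination of a few DCT bases and recovering the coefficients by least squares, fits in a few lines; the basis count and signal below are invented for illustration:

```python
import numpy as np
from scipy.fft import idct

# A smooth trajectory over F frames is modelled per coordinate as a linear
# combination of the first K DCT bases: x = Theta @ c.
F, K = 100, 8
Theta = idct(np.eye(F), axis=0, norm="ortho")[:, :K]    # F x K basis matrix

# Simulated noisy observations of one coordinate of an assembly point.
t = np.linspace(0.0, 1.0, F)
obs = 0.5 * t + 0.05 * np.cos(6 * np.pi * t) + 0.002 * np.random.randn(F)

# Least-squares fit of the K coefficients reconstructs the trajectory.
c, *_ = np.linalg.lstsq(Theta, obs, rcond=None)
trajectory = Theta @ c   # smooth estimate; low-frequency bases suffice
```

Restricting the fit to low-frequency bases acts as a built-in smoothness prior, which is why a handful of coefficients can describe an arbitrary but physically plausible trajectory.
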
26. Yang, Delong, Xunyu Zhong, Dongbing Gu, Xiafu Peng, Gongliu Yang, and Chaosheng Zou. "Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video." International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142090965. http://dx.doi.org/10.1177/1729881420909653.
Abstract: Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular videos are fundamental but challenging research topics in computer vision. Deep learning has recently demonstrated impressive performance on these tasks. This article presents a novel unsupervised deep learning framework for scene depth estimation, camera motion prediction and dynamic object localization from videos. Consecutive stereo image pairs are used to train the system, while only monocular images are needed for inference. The supervisory signals for the training stage come from various forms of image synthesis. Because consecutive stereo video is used, both spatial and temporal photometric errors are used to synthesize the images. Furthermore, to relieve the impact of occlusions, adaptive left-right consistency and forward-backward consistency losses are added to the objective function. Experimental results on the KITTI and Cityscapes datasets demonstrate that our method is more effective in depth estimation, camera motion prediction and dynamic object localization than previous models.

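The central training signal in such unsupervised frameworks is a photometric reprojection loss: pixels of one view are warped into another using the predicted depth and pose, and the intensity difference is minimized. A schematic PyTorch sketch (tensor names and shapes are assumed; the paper's full objective adds SSIM and the consistency terms mentioned above):

```python
import torch
import torch.nn.functional as F

def photometric_loss(I_tgt, I_src, depth, K, T):
    """Warp I_src (B,C,H,W) into the target view using the predicted depth
    map (B,1,H,W) and relative pose T (B,4,4), then penalize the intensity
    difference with I_tgt. K is a shared (3,3) intrinsics tensor. Plain L1
    is used here for brevity."""
    B, _, H, W = I_tgt.shape
    # Homogeneous pixel grid, shape (3, H*W).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float().view(3, -1)
    # Back-project with the predicted depth, move by T, re-project with K.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam = T[:, :3, :3] @ cam + T[:, :3, 3:]
    proj = K @ cam
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize coordinates to [-1, 1] and sample the source image.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    I_warp = F.grid_sample(I_src, grid, align_corners=True)
    return (I_tgt - I_warp).abs().mean()
```
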
27. Pełczyński, Paweł, Bartosz Ostrowski, and Dariusz Rzeszotarski. "Motion Vector Estimation of a Stereovision Camera with Inertial Sensors." Metrology and Measurement Systems 19, no. 1 (January 1, 2012): 141–50. http://dx.doi.org/10.2478/v10178-012-0013-z.
Abstract: The aim of the presented work was the development of a tracking algorithm for a stereoscopic camera setup equipped with an additional inertial sensor. The input of the algorithm consists of the image sequence and the angular velocity and linear acceleration vectors measured by the inertial sensor. The main assumption of the project was the fusion of data streams from both sources to obtain more accurate ego-motion estimation. An electronic module for recording the inertial sensor data was built. Inertial measurements allowed a coarse estimate of the image motion field, which reduced the search range of standard image-based methods. Continuous tracking of the camera motion was achieved, including moments of image information loss. Results of the presented study are being implemented in a currently developed obstacle avoidance system for visually impaired pedestrians.

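The fusion idea, using inertial measurements to coarsely predict the image motion field so that image-based matching only searches a small window, can be illustrated with the standard rotational optical-flow terms (sign conventions and all numbers here are assumptions, not the paper's):

```python
import numpy as np

def predict_pixel_shift(omega, dt, f_px, u, v):
    """Image displacement of a pixel (u, v), measured from the principal
    point, induced by camera rotation omega = (wx, wy, wz) over dt seconds.
    These are the standard rotational optical-flow terms; translation is
    ignored, which is reasonable for distant scene points."""
    wx, wy, wz = omega * dt
    du = (u * v / f_px) * wx - (f_px + u**2 / f_px) * wy + v * wz
    dv = (f_px + v**2 / f_px) * wx - (u * v / f_px) * wy - u * wz
    return du, dv

# A feature 50 px right and 20 px above the principal point; the gyro reads
# 0.2 rad/s of yaw over a 20 ms frame interval (illustrative numbers).
du, dv = predict_pixel_shift(np.array([0.0, 0.2, 0.0]), 0.02, 700.0, 50.0, -20.0)
# The matcher then searches a small window centred at (u + du, v + dv).
```
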
28. Shishido, Hidehiko, and Itaru Kitahara. "Calibration of multiple sparsely distributed cameras using a mobile camera." Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology 234, no. 1 (September 18, 2019): 37–48. http://dx.doi.org/10.1177/1754337119874276.
Abstract: In sports science research, there are many topics that utilize the body motion of athletes extracted by motion capture systems, since motion information is valuable data for improving an athlete’s skills. However, one of the unsolved challenges in motion capture is the extraction of athletes’ motion information during an actual game or match, as placing markers on athletes is impractical during game play. In this research, the authors propose a method for acquiring motion information without attaching markers, utilizing computer vision technology. In the proposed method, the three-dimensional world joint positions of the athlete’s body can be acquired using just two cameras without any visual markers, and the athlete’s three-dimensional joint positions during game play can be obtained without complicated preparations. Camera calibration, which estimates the projective relationship between three-dimensional world and two-dimensional image spaces, is one of the principal processes for three-dimensional image processing such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which needs to set up landmarks with known three-dimensional positions, is a common technique. However, as the target space expands, landmark placement becomes increasingly complicated. Although a weak-calibration method does not need known landmarks, the estimation precision depends on the accuracy of the correspondences between image captures. When multiple cameras are arranged sparsely, sufficient detection of corresponding points is difficult. In this research, the authors propose a calibration method that bridges multiple sparsely distributed cameras using mobile camera images. Appropriate spacing between the bridging images was confirmed through comparative experiments evaluating camera calibration accuracy while changing the number of bridging images. Furthermore, the proposed method was applied to capturing experiments in a large-scale space to verify its robustness. As a relevant example, the proposed method was applied to the three-dimensional skeleton estimation of badminton players, and a quantitative evaluation was conducted on camera calibration for the three-dimensional skeleton. The reprojection error of each part of the skeletons and the standard deviation were approximately 2.72 and 0.81 mm, respectively, confirming that the proposed method is highly accurate when applied to camera calibration. Finally, the proposed calibration method was quantitatively compared with a calibration method using the coordinates of eight manually selected points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.

29. Lei, Shao Shuai, Chang Qing Cao, and G. Xie. "A Qualitative Approach to Camera Motion Classification." Advanced Materials Research 562-564 (August 2012): 1887–90. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.1887.
Abstract: This paper proposes a robust, hierarchical camera motion classification approach based on an integrated histogram. First, a motion direction histogram is built, and the information entropy of the histogram is employed to classify camera motion into scaling and non-scaling operations. Then, static and translation operations are distinguished by the distribution information of the histogram, and the direction of the translation operation is also identified. Experimental results show that the new approach can efficiently achieve global motion estimation.

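A sketch of an entropy-based rule of the kind this abstract outlines: zooming spreads motion directions over all histogram bins (high entropy), while translation concentrates them in a few bins (low entropy). The bin count and thresholds below are illustrative:

```python
import numpy as np

def direction_histogram(flow, bins=16):
    """Normalized histogram of motion directions from a flow field (H,W,2)."""
    angles = np.arctan2(flow[..., 1], flow[..., 0])          # in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def classify(flow, entropy_thresh=2.0, trans_thresh=1.0):
    p = direction_histogram(flow)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Scaling (zoom): vectors point radially in all directions, high entropy.
    if entropy > entropy_thresh:
        return "zoom"
    # Otherwise distinguish static from translation by motion magnitude,
    # and read the translation direction off the dominant bin.
    if np.linalg.norm(flow, axis=-1).mean() < trans_thresh:
        return "static"
    return f"translation (dominant direction bin {np.argmax(p)})"
```
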
30. Yamaguchi, Koichiro, Takeo Kato, and Yoshiki Ninomiya. "Ego-Motion Estimation Using a Vehicle Mounted Monocular Camera." IEEJ Transactions on Electronics, Information and Systems 129, no. 12 (2009): 2213–21. http://dx.doi.org/10.1541/ieejeiss.129.2213.

31. Liang, Xuefeng, Cuicui Zhang, and Takashi Matsuyama. "A General Inlier Estimation for Moving Camera Motion Segmentation." IPSJ Transactions on Computer Vision and Applications 7 (2015): 163–74. http://dx.doi.org/10.2197/ipsjtcva.7.163.

32. Lee, Joong-Jae, Gye-Young Kim, and Hyung-Il Choi. "Robust Estimation of Camera Motion Using Fuzzy Classification Method." KIPS Transactions: Part B 13B, no. 7 (December 31, 2006): 671–78. http://dx.doi.org/10.3745/kipstb.2006.13b.7.671.

33. Srinivasan, M. V., S. Venkatesh, and R. Hosie. "Qualitative estimation of camera motion parameters from video sequences." Pattern Recognition 30, no. 4 (April 1997): 593–606. http://dx.doi.org/10.1016/s0031-3203(96)00106-9.

34. Fujimoto, Hironori, Jun Miura, and Shuji Oishi. "Motion Estimation and Obstacle Detection Using a Stereo Camera." Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) 2018 (2018): 1A1-L08. http://dx.doi.org/10.1299/jsmermd.2018.1a1-l08.

35. Li, Dan, Danya Song, Shuang Liu, Junwen Ji, Kang Zeng, Yingsong Hu, and Hefei Ling. "Camera pose estimation based on global structure from motion." Multimedia Tools and Applications 79, no. 31-32 (June 7, 2020): 23223–42. http://dx.doi.org/10.1007/s11042-020-09045-8.

36. Lin, Wen-Yan, Loong-Fah Cheong, Ping Tan, Guo Dong, and Siying Liu. "Simultaneous Camera Pose and Correspondence Estimation with Motion Coherence." International Journal of Computer Vision 96, no. 2 (May 11, 2011): 145–61. http://dx.doi.org/10.1007/s11263-011-0456-9.

37. Alenyà, G., and C. Torras. "Camera motion estimation by tracking contour deformation: Precision analysis." Image and Vision Computing 28, no. 3 (March 2010): 474–90. http://dx.doi.org/10.1016/j.imavis.2009.07.011.

38. Tumurbaatar, Tserennadmid, and Taejung Kim. "Comparative Study of Relative-Pose Estimations from a Monocular Image Sequence in Computer Vision and Photogrammetry." Sensors 19, no. 8 (April 22, 2019): 1905. http://dx.doi.org/10.3390/s19081905.
Abstract: Techniques for measuring the position and orientation of an object from corresponding images are based on the principles of epipolar geometry in the computer vision and photogrammetric fields. Many different approaches have been developed in computer vision, increasing the automation of the pure photogrammetric processes. The aim of this paper is to evaluate the main differences between photogrammetric and computer vision approaches for the pose estimation of an object from image sequences, and how these differences have to be considered in the choice of processing technique when using a single camera. The use of single cameras in consumer electronics has increased enormously, even though most 3D user interfaces require additional devices to sense 3D motion for their input. In this regard, using a monocular camera to determine 3D motion is unique. However, we argue that relative pose estimation from monocular image sequences has not been studied thoroughly through comparisons of photogrammetric and computer vision methods. To estimate motion parameters characterized by 3D rotation and 3D translation, estimation methods developed in the computer vision and photogrammetric fields are implemented. This paper describes a mathematical motion model for the proposed approaches, differentiating their geometric properties and estimations of the motion parameters. A precision analysis is conducted to investigate the main characteristics of the methods in both fields. The results of the comparison indicate the differences between the estimations in both fields in terms of accuracy and the test dataset. We show that homography-based approaches are more accurate than essential-matrix or relative orientation-based approaches under noisy conditions.

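Both families of methods compared here can be reproduced in spirit with a few lines of OpenCV, estimating the relative pose once through the essential matrix and once through homography decomposition (intrinsics and matched point arrays are assumed inputs):

```python
import cv2
import numpy as np

def relative_pose_both_ways(p1, p2, K):
    """Relative pose from matched points p1, p2 (N x 2 float32) via (a) the
    essential matrix and (b) homography decomposition -- the two approach
    families whose accuracy the paper compares under noise."""
    # (a) Essential-matrix route.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R_e, t_e, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)

    # (b) Homography route (suited to planar or rotation-dominant scenes);
    # up to four candidate (R, t, n) solutions are returned.
    H, _ = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    n_sol, Rs, ts, ns = cv2.decomposeHomographyMat(H, K)
    return (R_e, t_e), list(zip(Rs, ts, ns))
```
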
39. Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images." Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.
Abstract: In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot by means of a camera sensor alone. Visual odometry provides essential information for trajectory estimation in problems such as Localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of the motion transformation between two poses is computed incrementally, since only the processing of two consecutive omnidirectional images is required. In particular, we exploit the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot’s trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.

40. Terabayashi, Kenji, Hisanori Mitsumoto, Toru Morita, Yohei Aragaki, Noriko Shimomura, and Kazunori Umeda. "Measurement of Three-Dimensional Environment with a Fish-Eye Camera Based on Structure from Motion - Error Analysis." Journal of Robotics and Mechatronics 21, no. 6 (December 20, 2009): 680–88. http://dx.doi.org/10.20965/jrm.2009.p0680.
Abstract: This paper proposes a method for measuring a 3-dimensional (3D) environment and estimating camera movement from two fish-eye images. The method deals with the large distortion of fish-eye images and calibrates internal and external camera parameters precisely by simultaneous estimation. In this paper, we analyze 3D measurement accuracy based on a theoretical model and evaluate it in experimental and real environments. These analyses show that the theoretical measurement error model holds over a wide range of fish-eye views.

41. Zhong, Yang Jun, and Shui Ping Zhang. "Robust Motion Objects Detection in Moving Camera Scenes." Advanced Materials Research 143-144 (October 2010): 782–86. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.782.
Abstract: This paper presents an efficient method for moving object detection against a dynamic background. The proposed method comprises three main steps. First, abundant SIFT matching pairs are obtained from two successive frames. Second, a robust global motion estimation method based on the SIFT matching pairs is presented. Finally, the moving objects are detected after compensating for the global motion. Evaluations based on extensive experiments show that the proposed method achieves motion detection against dynamic backgrounds.

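A sketch of the pipeline this abstract outlines: SIFT matches between consecutive frames, a robust global motion fit (a RANSAC affine estimate stands in for the paper's estimator), then compensation and frame differencing to expose independently moving objects. Thresholds are illustrative:

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray):
    """Grayscale frames in, binary mask of independently moving objects out."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Robust global (camera) motion: an affine model fitted with RANSAC.
    A, _ = cv2.estimateAffine2D(p1, p2, method=cv2.RANSAC,
                                ransacReprojThreshold=3.0)

    # Compensate the global motion, then difference the frames: large
    # residuals correspond to objects moving independently of the camera.
    warped = cv2.warpAffine(prev_gray, A, (curr_gray.shape[1], curr_gray.shape[0]))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    return mask
```
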
42. Wang, J., N. V. Patel, W. I. Grosky, and F. Fotouhi. "Moving Camera Moving Object Segmentation in Compressed Video Sequences." International Journal of Image and Graphics 09, no. 04 (October 2009): 609–27. http://dx.doi.org/10.1142/s0219467809003617.
Abstract: In this paper, we address the problem of camera and object motion detection in the compressed domain. The estimation of camera motion and the segmentation of moving objects have been widely studied in a variety of contexts for video analysis, due to their capability of providing essential clues for interpreting the high-level semantics of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. MPEG-2 compressed-domain information, namely motion vectors (MV) and Discrete Cosine Transform (DCT) coefficients, is filtered and manipulated to obtain a dense and reliable motion vector field (MVF) over consecutive frames. An iterative segmentation scheme based upon the generalized affine transformation model is exploited to carry out the global camera motion detection. The foreground spatiotemporal objects are separated from the background using a temporal consistency check on the output of the iterative segmentation. This consistency check process can coalesce the resulting foreground blocks and weed out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.

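The heart of such compressed-domain schemes is fitting a global affine motion model to the block motion vectors and iteratively discarding blocks that disagree with it, which separates background from foreground. A numpy sketch of that iteration (a plain six-parameter affine model stands in for the paper's generalized model, and MPEG-2 parsing is omitted):

```python
import numpy as np

def fit_global_affine(xy, mv, iters=5, thresh=1.5):
    """Iteratively fit dx = a1*x + a2*y + a3, dy = a4*x + a5*y + a6 to the
    block motion-vector field. xy (N,2) holds block centres, mv (N,2) their
    motion vectors; blocks with large residuals are treated as foreground
    and dropped from the next fit."""
    inlier = np.ones(len(xy), dtype=bool)
    params = np.zeros(6)
    for _ in range(iters):
        x, y = xy[inlier, 0], xy[inlier, 1]
        A = np.stack([x, y, np.ones_like(x)], axis=1)
        ax, *_ = np.linalg.lstsq(A, mv[inlier, 0], rcond=None)
        ay, *_ = np.linalg.lstsq(A, mv[inlier, 1], rcond=None)
        params = np.concatenate([ax, ay])
        pred = np.stack([xy @ ax[:2] + ax[2], xy @ ay[:2] + ay[2]], axis=1)
        inlier = np.linalg.norm(mv - pred, axis=1) < thresh
    return params, ~inlier   # global affine parameters, foreground mask
```
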
43. Hu, Zhencheng, and Keiichi Uchimura. "Dynamical Estimation of Camera Motion Parameters Using Focus of Expansion." IFAC Proceedings Volumes 32, no. 2 (July 1999): 8422–27. http://dx.doi.org/10.1016/s1474-6670(17)57436-4.

44. Jeong, Woojin, Jin Wook Park, Jong Min Lee, Tae Eun Song, Wonju Choi, and Young Shik Moon. "Video Deblurring Using Camera Motion Estimation and Patch-wise Deconvolution." Journal of the Institute of Electronics and Information Engineers 51, no. 12 (December 25, 2014): 130–39. http://dx.doi.org/10.5573/ieie.2014.51.12.130.

45. Sung, Changhun, and Myung Jin Chung. "Multi-Scale Descriptor for Robust and Fast Camera Motion Estimation." IEEE Signal Processing Letters 20, no. 7 (July 2013): 725–28. http://dx.doi.org/10.1109/lsp.2013.2264672.

46. Tumurbaatar, Tserennadmid, and Taejung Kim. "Development of real-time object motion estimation from single camera." Spatial Information Research 25, no. 5 (August 18, 2017): 647–56. http://dx.doi.org/10.1007/s41324-017-0130-6.

47. Matsuhisa, Ryota, Shintaro Ono, Hiroshi Kawasaki, Atsuhiko Banno, and Katsushi Ikeuchi. "Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera." International Journal of Intelligent Transportation Systems Research 8, no. 2 (April 20, 2010): 106–17. http://dx.doi.org/10.1007/s13177-010-0011-z.

48. Jobbágy, Á., and E. H. Furnée. "Marker centre estimation algorithms in CCD camera-based motion analysis." Medical & Biological Engineering & Computing 32, no. 1 (January 1994): 85–91. http://dx.doi.org/10.1007/bf02512484.

49. Liu, Wei, Wenxiao Shi, Yaowen Lv, Jingtai Cao, Yumei Yin, Yuanhao Wu, Jihong Wang, and Xuefen Chi. "A novel method of camera pose estimation by parabolic motion." Optik 124, no. 24 (December 2013): 6840–45. http://dx.doi.org/10.1016/j.ijleo.2013.05.074.

50. Liu, Hong, Ning Pan, Heng Lu, Enmin Song, Qian Wang, and Chih-Cheng Hung. "Wireless Capsule Endoscopy Video Reduction Based on Camera Motion Estimation." Journal of Digital Imaging 26, no. 2 (August 7, 2012): 287–301. http://dx.doi.org/10.1007/s10278-012-9519-x.