Academic literature on the topic 'Camera Ego-Motion Estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Camera Ego-Motion Estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Camera Ego-Motion Estimation"

1

Mansour, M., P. Davidson, O. A. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth estimation with ego-motion assisted monocular camera." Giroskopiya i Navigatsiya 27, no. 2 (2019): 28–51. http://dx.doi.org/10.17285/0869-7035.2019.27.2.028-051.

2

Mansour, M., P. Davidson, O. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth Estimation with Ego-Motion Assisted Monocular Camera." Gyroscopy and Navigation 10, no. 3 (2019): 111–23. http://dx.doi.org/10.1134/s2075108719030064.

3

Linok, S. A., and D. A. Yudin. "Influence of Neural Network Receptive Field on Monocular Depth and Ego-Motion Estimation." Optical Memory and Neural Networks 32, S2 (2023): S206–S213. http://dx.doi.org/10.3103/s1060992x23060103.

Abstract:
We present an analysis of a self-supervised learning approach for monocular depth and ego-motion estimation. This is an important problem for the computer vision systems of robots, autonomous vehicles, and other intelligent agents equipped with only a monocular camera sensor. We have explored a number of neural network architectures that perform single-frame depth and multi-frame camera pose predictions to minimize the photometric error between consecutive frames in a sequence of camera images. Unlike other existing works, our proposed approach, called ERF-SfMLearner, examines the influence of the …
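The photometric-error objective this abstract refers to is the standard self-supervised training signal for joint depth and ego-motion learning. As an illustrative aside, a minimal sketch follows; the function name and the plain L1 form are my own simplification, not taken from ERF-SfMLearner:

```python
import numpy as np

def photometric_error(target_frame, warped_source):
    """Mean absolute (L1) photometric error between a target frame and a
    source frame warped into the target view using predicted depth and
    camera pose. Real methods typically add an SSIM term and per-pixel
    masking; both are omitted here for brevity."""
    diff = target_frame.astype(np.float64) - warped_source.astype(np.float64)
    return float(np.mean(np.abs(diff)))

# A perfectly warped source reproduces the target, giving zero error.
frame = np.random.rand(8, 8, 3)
print(photometric_error(frame, frame))  # 0.0
```

Training then adjusts the depth and pose networks so that this error, summed over image pairs, decreases.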
4

Yusuf, Sait Erdem, Feyza Galip, Ibrahim Furkan Ince, and Md Haidar Sharif. "Estimation of Camera Ego-Motion for Real-Time Computer Vision Applications." International Journal of Scientific Research in Information Systems and Engineering (IJSRISE) 1, no. 2 (2015): 115–20. https://doi.org/10.5281/zenodo.836175.

Abstract:
How can we distinguish the scene motion from the camera motion? In most computer vision applications, camera movements hinder true detection of events. For instance, if the camera is trembling due to wind in an outdoor environment, it creates high-frequency motion in the scene as well. To detect flame, we use high-frequency motion; if the camera trembles, non-flame regions can also be detected as flame due to high-frequency camera motion. Consequently, it is essential to detect the camera motion and avoid event detection (e.g., flame detection) while the camera is moving. …
5

Chen, Haiwen, Jin Chen, Zhuohuai Guan, Yaoming Li, Kai Cheng, and Zhihong Cui. "Stereovision-Based Ego-Motion Estimation for Combine Harvesters." Sensors 22, no. 17 (2022): 6394. http://dx.doi.org/10.3390/s22176394.

Abstract:
Ego-motion estimation is a foundational capability for autonomous combine harvesters, supporting high-level functions such as navigation and harvesting. This paper presents a novel approach for estimating the motion of a combine harvester from a sequence of stereo images. The proposed method starts with tracking a set of 3D landmarks which are triangulated from stereo-matched features. Six-degree-of-freedom (DoF) ego-motion is obtained by minimizing the reprojection error of those landmarks on the current frame. Then, local bundle adjustment is performed to refine the structure (i.e., landmark positions) …
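The pose step this abstract describes, minimizing the reprojection error of triangulated landmarks, can be sketched in isolation. The following is a hedged illustration under a simple pinhole-camera assumption; all names are mine, and the paper's stereo matching, optimizer, and bundle adjustment are omitted:

```python
import numpy as np

def reprojection_error(K, R, t, landmarks, observations):
    """Mean pixel distance between 3D landmarks projected under camera
    pose (R, t) with intrinsics K and their observed image locations.
    Ego-motion estimation would minimize this quantity over (R, t)."""
    pts_cam = R @ landmarks.T + t.reshape(3, 1)   # world -> camera frame
    proj = K @ pts_cam                            # pinhole projection
    uv = (proj[:2] / proj[2]).T                   # normalize to pixel coords
    return float(np.mean(np.linalg.norm(uv - observations, axis=1)))

# With the true pose, projected landmarks match their observations exactly.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
landmarks = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 3.0]])
obs = ((K @ landmarks.T)[:2] / (K @ landmarks.T)[2]).T
print(reprojection_error(K, np.eye(3), np.zeros(3), landmarks, obs))  # 0.0
```

A nonlinear least-squares solver over the six pose parameters would drive this error toward zero frame by frame.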
6

Li, Jiaman, C. Karen Liu, and Jiajun Wu. "Ego-Body Pose Estimation via Ego-Head Pose Estimation." AI Matters 9, no. 2 (2023): 20–23. http://dx.doi.org/10.1145/3609468.3609473.

Abstract:
Estimating 3D human motion from an egocentric video, which records the environment viewed from the first-person perspective with a front-facing monocular camera, is critical to applications in VR/AR. However, naively learning a mapping between egocentric videos and full-body human motions is challenging for two reasons. First, modeling this complex relationship is difficult; unlike reconstructing motion from third-person videos, the human body is often out of view of an egocentric video. Second, learning this mapping requires a large-scale, diverse dataset containing paired egocentric videos …
7

Yamaguchi, Koichiro, Takeo Kato, and Yoshiki Ninomiya. "Ego-Motion Estimation Using a Vehicle Mounted Monocular Camera." IEEJ Transactions on Electronics, Information and Systems 129, no. 12 (2009): 2213–21. http://dx.doi.org/10.1541/ieejeiss.129.2213.

8

Minami, Mamoru, and Wei Song. "Hand-Eye Motion-Invariant Pose Estimation with Online 1-Step GA: 3D Pose Tracking Accuracy Evaluation in Dynamic Hand-Eye Oscillation." Journal of Robotics and Mechatronics 21, no. 6 (2009): 709–19. http://dx.doi.org/10.20965/jrm.2009.p0709.

Abstract:
This paper presents online pose measurement for a 3-dimensional (3-D) object detected by stereo hand-eye cameras. Our proposal improves 3-D pose tracking accuracy by compensating for the fictional motion of the target in camera images stemming from the ego-motion of the hand-eye camera caused by dynamic manipulator oscillation. This motion feed-forward (MFF) is combined with the evolutionary search of a genetic algorithm (GA) and fitness evaluation based on stereo model matching whose pose is expressed using a unit quaternion. The proposal's effectiveness was confirmed in simulation tracking a …
9

Czech, Phillip, Markus Braun, Ulrich Kreßel, and Bin Yang. "Behavior-Aware Pedestrian Trajectory Prediction in Ego-Centric Camera Views with Spatio-Temporal Ego-Motion Estimation." Machine Learning and Knowledge Extraction 5, no. 3 (2023): 957–78. http://dx.doi.org/10.3390/make5030050.

Abstract:
With the ongoing development of automated driving systems, the crucial task of predicting pedestrian behavior is attracting growing attention. The prediction of future pedestrian trajectories from the ego-vehicle camera perspective is particularly challenging due to the dynamically changing scene. Therefore, we present Behavior-Aware Pedestrian Trajectory Prediction (BA-PTP), a novel approach to pedestrian trajectory prediction for ego-centric camera views. It incorporates behavioral features extracted from real-world traffic scene observations, such as the body and head orientation of pedestrians …
10

Lin, Lili, Wan Luo, Zhengmao Yan, and Wenhui Zhou. "Rigid-aware self-supervised GAN for camera ego-motion estimation." Digital Signal Processing 126 (June 2022): 103471. http://dx.doi.org/10.1016/j.dsp.2022.103471.


Dissertations / Theses on the topic "Camera Ego-Motion Estimation"

1

Lee, Hong Yun. "Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.

2

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, …
3

E, Yen-Chi [鄂彥齊]. "Ego-motion Estimation Based on RGB-D Camera and Inertial Sensor." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/01115649228044152260.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Networking and Multimedia, academic year 103. Ego-motion estimation has a wide variety of applications in robot control and automation. Proper local estimation of ego-motion helps an autonomous robot recognize its surrounding environment and recover the trajectory it has traversed. In this thesis, we present a system that estimates ego-motion by fusing keyframe-based visual odometry and inertial measurements. The hardware of the system includes an RGB-D camera for capturing color and depth images and an Inertial Measurement Unit (IMU) for acquiring inertial measurements. The motion of the camera between two consecutive …
4

Hong, Kai-Chen [洪楷宸]. "The Study of Ego-motion Estimation for a Moving Object with Monocular Camera using Visual Odometry." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/x64wny.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Electrical and Control Engineering, academic year 107. Visual odometry is the process of estimating the ego-motion of a moving object; in other words, it is the process of determining the position of a moving object. The SLAM system is considered the best method for spatial positioning in the visual field. However, a SLAM system is quite large (front end: visual odometry; back end: optimization of the ego-motion estimation error), so if the system needs to perform other processing at the same time, it faces challenges in real-time performance. …
5

Tsao, An-Ting [曹安廷]. "Ego-motion estimation using optical flow fields observed from multiple cameras." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/53743692521793352486.

6

Chen, Chih-Ting [陳芝婷]. "Ego Motion Estimation in a Scene with Moving Objects Using Stereo Cameras." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/75105273156349053462.


Book chapters on the topic "Camera Ego-Motion Estimation"

1

Svoboda, Tomáš, and Peter Sturm. "A badly calibrated camera in ego-motion estimation — propagation of uncertainty." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_116.

2

Villaverde, Ivan, Zelmar Echegoyen, and Manuel Graña. "Neuro-Evolutive System for Ego-Motion Estimation with a 3D Camera." In Advances in Neuro-Information Processing. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02490-0_124.

3

Lu, Lingxi. "A review on camera ego-motion estimation methods based on optical flow for robotics." In Automotive, Mechanical and Electrical Engineering. CRC Press, 2017. http://dx.doi.org/10.1201/9781315210445-119.


Conference papers on the topic "Camera Ego-Motion Estimation"

1

Ratshidaho, Terence, Jules Raymond Tapamo, Jonathan Claassens, and Natasha Govender. "ToF camera ego-motion estimation." In 2012 5th Robotics and Mechatronics Conference of South Africa (ROBMECH). IEEE, 2012. http://dx.doi.org/10.1109/robomech.2012.6558458.

2

Yuan, Ding, and Yalong Yu. "A new method on camera ego-motion estimation." In 2013 6th International Congress on Image and Signal Processing (CISP). IEEE, 2013. http://dx.doi.org/10.1109/cisp.2013.6745247.

3

Stürzl, Wolfgang, D. Burschka, and M. Suppa. "Monocular ego-motion estimation with a compact omnidirectional camera." In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iros.2010.5649970.

4

Tian, Yi, and Juan Andrade-Cetto. "Spiking Neural Network for Event camera ego-motion estimation." In Materials for Sustainable Development Conference (MAT-SUS). FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2022. http://dx.doi.org/10.29363/nanoge.nfm.2022.009.

5

Effendi, Sutono, and Ray Jarvis. "Camera Ego-Motion Estimation Using Phase Correlation under Planar Motion Constraint." In 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010). IEEE, 2010. http://dx.doi.org/10.1109/dicta.2010.38.

6

Cocoma-Ortega, Jose, and Jose Martinez-Carranza. "Towards fast ego-motion estimation using a two-stream network." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202206241.

Abstract:
Autonomous navigation is a challenging task that requires solving individual problems such as the camera pose. Moreover, if agile motion is performed during navigation, the problem becomes complex due to the necessity of knowing the pose (the generated trajectory) as fast as possible to continue the agile motion. Several works have proposed approaches based on geometric algorithms and deep-learning-based solutions that reduce estimation error. Still, the prediction is performed at 30 Hz on average in most cases. It is desirable to increase the frequency of operation to allow cameras with high …
7

Hayakawa, Jun, and Behzad Dariush. "Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera." In 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019. http://dx.doi.org/10.1109/ivs.2019.8814037.

8

Yamaguchi, K., T. Kato, and Y. Ninomiya. "Vehicle Ego-Motion Estimation and Moving Object Detection using a Monocular Camera." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.1165.

9

Ohr, Florian M., Thusitha Parakrama, and W. Rosenstiel. "Model based estimation of ego-motion and road plane using vehicle-mounted camera." In 2013 16th International IEEE Conference on Intelligent Transportation Systems - (ITSC 2013). IEEE, 2013. http://dx.doi.org/10.1109/itsc.2013.6728297.

10

Wang, Shiyan, and Huimin Yu. "A variational approach for ego-motion estimation and segmentation based on 3D TOF camera." In 2011 4th International Congress on Image and Signal Processing (CISP). IEEE, 2011. http://dx.doi.org/10.1109/cisp.2011.6100402.
