Journal articles on the topic "Camera Ego-Motion Estimation"

Check out the 50 best scholarly journal articles on the topic "Camera Ego-Motion Estimation."

Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant details are available in the publication's metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1. Mansour, M., P. Davidson, O. A. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth estimation with ego-motion assisted monocular camera." Giroskopiya i Navigatsiya 27, no. 2 (2019): 28–51. http://dx.doi.org/10.17285/0869-7035.2019.27.2.028-051.

2. Mansour, M., P. Davidson, O. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth Estimation with Ego-Motion Assisted Monocular Camera." Gyroscopy and Navigation 10, no. 3 (2019): 111–23. http://dx.doi.org/10.1134/s2075108719030064.

3. Linok, S. A., and D. A. Yudin. "Influence of Neural Network Receptive Field on Monocular Depth and Ego-Motion Estimation." Optical Memory and Neural Networks 32, S2 (2023): S206–S213. http://dx.doi.org/10.3103/s1060992x23060103.

Abstract: We present an analysis of a self-supervised learning approach for monocular depth and ego-motion estimation. This is an important problem for the computer vision systems of robots, autonomous vehicles and other intelligent agents equipped only with a monocular camera sensor. We have explored a number of neural network architectures that perform single-frame depth and multi-frame camera pose predictions to minimize the photometric error between consecutive frames on a sequence of camera images. Unlike other existing works, our proposed approach, called ERF-SfMLearner, examines the influence of the…
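The photometric error mentioned above is the core training signal in this family of self-supervised methods. A minimal sketch follows, assuming PyTorch and a pinhole camera model; the tensor shapes, names, and warping details are illustrative assumptions on our part, not ERF-SfMLearner's actual code.

import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    # target, source: (B,3,H,W) frames; depth: (B,1,H,W); pose: (B,4,4)
    # target-to-source transform; K: (B,3,3) camera intrinsics.
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).float().view(1, 3, -1).expand(B, -1, -1)
    # Back-project pixels to 3D points: X = depth * K^-1 * p.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)
    # Transform into the source view and project back to pixel coordinates.
    proj = K @ (pose @ cam)[:, :3]
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Sample the source image at the projected locations and compare.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (warped - target).abs().mean()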

4. Yusuf, Sait Erdem, Feyza Galip, Ibrahim Furkan Ince, and Md Haidar Sharif. "Estimation of Camera Ego-Motion for Real-Time Computer Vision Applications." International Journal of Scientific Research in Information Systems and Engineering (IJSRISE) 1, no. 2 (2015): 115–20. https://doi.org/10.5281/zenodo.836175.

Abstract: How can we distinguish scene motion from camera motion? In most computer vision applications, camera movement prevents the true detection of events. For instance, if the camera is trembling due to wind in an outdoor environment, it creates high-frequency motion in the scene as well. To detect flame, we use high-frequency motion; if the camera trembles, then non-flame regions can also be detected as flame due to high-frequency camera motion. Consequently, it is essential to detect camera motion and suspend event detection (e.g., flame detection) while the camera is moving. In this paper…
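The snippet cuts off before the authors' own method, so as an illustration only, here is one common way to flag global camera motion, using OpenCV's phase correlation; the threshold and names are our assumptions and stand in for, rather than reproduce, the paper's detector.

import cv2
import numpy as np

def camera_is_moving(prev_gray, cur_gray, shift_thresh=1.0):
    # Phase correlation estimates a single dominant translation for the whole
    # frame; a large shift suggests ego-motion, so event detection (e.g.,
    # flame detection) should be suspended for this frame pair.
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(cur_gray))
    return float(np.hypot(dx, dy)) > shift_thresh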

5. Chen, Haiwen, Jin Chen, Zhuohuai Guan, Yaoming Li, Kai Cheng, and Zhihong Cui. "Stereovision-Based Ego-Motion Estimation for Combine Harvesters." Sensors 22, no. 17 (2022): 6394. http://dx.doi.org/10.3390/s22176394.

Abstract: Ego-motion estimation is a foundational capability for autonomous combine harvesters, supporting high-level functions such as navigation and harvesting. This paper presents a novel approach for estimating the motion of a combine harvester from a sequence of stereo images. The proposed method starts with tracking a set of 3D landmarks which are triangulated from stereo-matched features. Six-degree-of-freedom (DoF) ego-motion is obtained by minimizing the reprojection error of those landmarks on the current frame. Then, local bundle adjustment is performed to refine structure (i.e., landmark positions…
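For the flavor of the reprojection-error step described above, here is a compact sketch of 6-DoF pose recovery over tracked landmarks using SciPy; the axis-angle parameterization and all names are our assumptions, not the paper's implementation.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, landmarks, observations, K):
    # params: [rx, ry, rz, tx, ty, tz]; landmarks: (N,3) world points;
    # observations: (N,2) measured pixel coordinates; K: (3,3) intrinsics.
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = landmarks @ R.T + params[3:]      # world -> camera frame
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]         # perspective division
    return (uv - observations).ravel()      # stacked reprojection errors

def estimate_pose(landmarks, observations, K):
    # Minimize the reprojection error, starting from the identity pose.
    result = least_squares(residuals, np.zeros(6), args=(landmarks, observations, K))
    return result.x                         # axis-angle rotation + translation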

6. Li, Jiaman, C. Karen Liu, and Jiajun Wu. "Ego-Body Pose Estimation via Ego-Head Pose Estimation." AI Matters 9, no. 2 (2023): 20–23. http://dx.doi.org/10.1145/3609468.3609473.

Abstract: Estimating 3D human motion from an egocentric video, which records the environment viewed from the first-person perspective with a front-facing monocular camera, is critical to applications in VR/AR. However, naively learning a mapping between egocentric videos and full-body human motions is challenging for two reasons. First, modeling this complex relationship is difficult; unlike reconstructing motion from third-person videos, the human body is often out of view of an egocentric video. Second, learning this mapping requires a large-scale, diverse dataset containing paired egocentric videos…

7. Yamaguchi, Koichiro, Takeo Kato, and Yoshiki Ninomiya. "Ego-Motion Estimation Using a Vehicle Mounted Monocular Camera." IEEJ Transactions on Electronics, Information and Systems 129, no. 12 (2009): 2213–21. http://dx.doi.org/10.1541/ieejeiss.129.2213.

8. Minami, Mamoru, and Wei Song. "Hand-Eye Motion-Invariant Pose Estimation with Online 1-Step GA: 3D Pose Tracking Accuracy Evaluation in Dynamic Hand-Eye Oscillation." Journal of Robotics and Mechatronics 21, no. 6 (2009): 709–19. http://dx.doi.org/10.20965/jrm.2009.p0709.

Abstract: This paper presents online pose measurement for a 3-dimensional (3-D) object detected by stereo hand-eye cameras. Our proposal improves 3-D pose tracking accuracy by compensating for the fictional motion of the target in camera images stemming from the ego motion of the hand-eye camera caused by dynamic manipulator oscillation. This motion feed-forward (MFF) is combined into the evolutionary search of a genetic algorithm (GA) and fitness evaluation based on stereo model matching whose pose is expressed using a unit quaternion. The proposal's effectiveness was confirmed in simulation tracking a…

9. Czech, Phillip, Markus Braun, Ulrich Kreßel, and Bin Yang. "Behavior-Aware Pedestrian Trajectory Prediction in Ego-Centric Camera Views with Spatio-Temporal Ego-Motion Estimation." Machine Learning and Knowledge Extraction 5, no. 3 (2023): 957–78. http://dx.doi.org/10.3390/make5030050.

Abstract: With the ongoing development of automated driving systems, the crucial task of predicting pedestrian behavior is attracting growing attention. The prediction of future pedestrian trajectories from the ego-vehicle camera perspective is particularly challenging due to the dynamically changing scene. Therefore, we present Behavior-Aware Pedestrian Trajectory Prediction (BA-PTP), a novel approach to pedestrian trajectory prediction for ego-centric camera views. It incorporates behavioral features extracted from real-world traffic scene observations, such as the body and head orientation of pedestrians…

10. Lin, Lili, Wan Luo, Zhengmao Yan, and Wenhui Zhou. "Rigid-aware self-supervised GAN for camera ego-motion estimation." Digital Signal Processing 126 (June 2022): 103471. http://dx.doi.org/10.1016/j.dsp.2022.103471.

11. Matsuhisa, Ryota, Shintaro Ono, Hiroshi Kawasaki, Atsuhiko Banno, and Katsushi Ikeuchi. "Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera." International Journal of Intelligent Transportation Systems Research 8, no. 2 (2010): 106–17. http://dx.doi.org/10.1007/s13177-010-0011-z.

12. Pełczyński, Paweł, Bartosz Ostrowski, and Dariusz Rzeszotarski. "Motion Vector Estimation of a Stereovision Camera with Inertial Sensors." Metrology and Measurement Systems 19, no. 1 (2012): 141–50. http://dx.doi.org/10.2478/v10178-012-0013-z.

Abstract: The aim of the presented work was the development of a tracking algorithm for a stereoscopic camera setup equipped with an additional inertial sensor. The input of the algorithm consists of the image sequence and the angular velocity and linear acceleration vectors measured by the inertial sensor. The main assumption of the project was fusion of data streams from both sources to obtain more accurate ego-motion estimation. An electronic module for recording the inertial sensor data was built. Inertial measurements allowed a coarse…

13. Sharma, Alisha, Ryan Nett, and Jonathan Ventura. "Unsupervised Learning of Depth and Ego-Motion from Cylindrical Panoramic Video with Applications for Virtual Reality." International Journal of Semantic Computing 14, no. 3 (2020): 333–56. http://dx.doi.org/10.1142/s1793351x20400139.

Abstract: We introduce a convolutional neural network model for unsupervised learning of depth and ego-motion from cylindrical panoramic video. Panoramic depth estimation is an important technology for applications such as virtual reality, 3D modeling, and autonomous robotic navigation. In contrast to previous approaches for applying convolutional neural networks to panoramic imagery, we use the cylindrical panoramic projection, which allows for the use of traditional CNN layers such as convolutional filters and max pooling without modification. Our evaluation of synthetic and real data shows that unsupervised…

14. Wang, Ke, Xin Huang, JunLan Chen, Chuan Cao, Zhoubing Xiong, and Long Chen. "Forward and Backward Visual Fusion Approach to Motion Estimation with High Robustness and Low Cost." Remote Sensing 11, no. 18 (2019): 2139. http://dx.doi.org/10.3390/rs11182139.

Abstract: We present a novel low-cost visual odometry method of estimating the ego-motion (self-motion) of ground vehicles by detecting the changes that motion induces in the images. Different from traditional localization methods that use a differential global positioning system (GPS), a precise inertial measurement unit (IMU) or 3D Lidar, the proposed method only leverages data from inexpensive visual sensors of forward and backward onboard cameras. Starting with the spatial-temporal synchronization, the scale factor of backward monocular visual odometry was estimated based on the MSE optimization method…

15. Zhang, Jiaxin, Wei Sui, Qian Zhang, Tao Chen, and Cong Yang. "Towards Accurate Ground Plane Normal Estimation from Ego-Motion." Sensors 22, no. 23 (2022): 9375. http://dx.doi.org/10.3390/s22239375.

Abstract: In this paper, we introduce a novel approach for ground plane normal estimation of wheeled vehicles. In practice, the ground plane changes dynamically due to braking and unstable road surfaces. As a result, the vehicle pose, especially the pitch angle, oscillates from subtle to obvious. Thus, estimating the ground plane normal is meaningful, since it can be encoded to improve the robustness of various autonomous driving tasks (e.g., 3D object detection, road surface reconstruction, and trajectory planning). Our proposed method only uses odometry as input and estimates accurate ground plane normal…

16. Shariati, Armon, Christian Holz, and Sudipta Sinha. "Towards Privacy-Preserving Ego-Motion Estimation Using an Extremely Low-Resolution Camera." IEEE Robotics and Automation Letters 5, no. 2 (2020): 1223–30. http://dx.doi.org/10.1109/lra.2020.2967307.

17. Gandhi, Tarak, and Mohan Trivedi. "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera." Machine Vision and Applications 16, no. 2 (2005): 85–95. http://dx.doi.org/10.1007/s00138-004-0168-z.

18. Wang, Zhongli, Litong Fan, and Baigen Cai. "A 3D Relative-Motion Context Constraint-Based MAP Solution for Multiple-Object Tracking Problems." Sensors 18, no. 7 (2018): 2363. http://dx.doi.org/10.3390/s18072363.

Abstract: Multi-object tracking (MOT), especially by using a moving monocular camera, is a very challenging task in the field of visual object tracking. To tackle this problem, the traditional tracking-by-detection method is heavily dependent on detection results. Occlusion and mis-detections will often lead to broken tracklets or drifting. In this paper, the tasks of MOT and camera motion estimation are formulated as finding a maximum a posteriori (MAP) solution of their joint probability and are synchronously solved in a unified framework. To improve performance, we incorporate the three-dimensional (3D) relative…

19. Pandey, Tejas, Dexmont Pena, Jonathan Byrne, and David Moloney. "Leveraging Deep Learning for Visual Odometry Using Optical Flow." Sensors 21, no. 4 (2021): 1313. http://dx.doi.org/10.3390/s21041313.

Abstract: In this paper, we study deep learning approaches for monocular visual odometry (VO). Deep learning solutions have been shown to be effective in VO applications, replacing the need for highly engineered steps, such as feature extraction and outlier rejection in a traditional pipeline. We propose a new architecture combining ego-motion estimation and sequence-based learning using deep neural networks. We estimate camera motion from optical flow using Convolutional Neural Networks (CNNs) and model the motion dynamics using Recurrent Neural Networks (RNNs). The network outputs the relative 6-DOF camera…
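As a rough architectural sketch only (layer sizes and names are illustrative assumptions, not the authors' network), the pattern of a CNN encoding per-pair optical flow and an RNN modeling the sequence looks like this in PyTorch:

import torch
import torch.nn as nn

class FlowVO(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(            # optical flow has 2 channels (u, v)
            nn.Conv2d(2, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)         # 3 rotation + 3 translation

    def forward(self, flows):                    # flows: (B, T, 2, H, W)
        B, T = flows.shape[:2]
        feats = self.encoder(flows.flatten(0, 1)).view(B, T, 64)
        out, _ = self.rnn(feats)                 # motion dynamics across time
        return self.head(out)                    # (B, T, 6) relative poses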

20. Xiong, Lu, Yongkun Wen, Yuyao Huang, Junqiao Zhao, and Wei Tian. "Joint Unsupervised Learning of Depth, Pose, Ground Normal Vector and Ground Segmentation by a Monocular Camera Sensor." Sensors 20, no. 13 (2020): 3737. http://dx.doi.org/10.3390/s20133737.

Abstract: We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation and ground normal vector from only monocular RGB video sequences. In our approach, estimates of the different scene structures can mutually benefit from each other through joint optimization. Specifically, we use a mutual information loss to pre-train the ground segmentation network before adding the corresponding self-learning label obtained by a geometric method. By using the static nature of the ground and its normal vector, the scene depth and ego-motion can be efficiently learned…

21. Ci, Wenyan, and Yingping Huang. "A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera." Sensors 16, no. 10 (2016): 1704. http://dx.doi.org/10.3390/s16101704.

22. Tian, Miao, Banglei Guan, Zhibin Xing, and Friedrich Fraundorfer. "Efficient Ego-Motion Estimation for Multi-Camera Systems With Decoupled Rotation and Translation." IEEE Access 8 (2020): 153804–14. http://dx.doi.org/10.1109/access.2020.3018225.

23. Yang, DongFang, FuChun Sun, ShiCheng Wang, and JinSheng Zhang. "Simultaneous estimation of ego-motion and vehicle distance by using a monocular camera." Science China Information Sciences 57, no. 5 (2013): 1–10. http://dx.doi.org/10.1007/s11432-013-4884-8.

24. Haggag, M., A. Moussa, and N. El-Sheimy. "Hybrid Deep Learning Approach for Vehicle's Relative Attitude Estimation Using Monocular Camera." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1/W1-2023 (December 5, 2023): 649–55. http://dx.doi.org/10.5194/isprs-annals-x-1-w1-2023-649-2023.

Abstract: Relative pose estimation using a monocular camera is one of the most common approaches for aiding a vehicle's navigation. It involves determining the position and orientation of a vehicle relative to its surroundings using only a single camera. This can be achieved through four main steps: feature detection and matching, motion estimation, filtering and optimization, and scale estimation. Feature tracking involves detecting and tracking distinctive visual features in the environment, such as corners or edges, and using their relative motion to estimate the camera's movement. This approach…
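The feature-matching and motion-estimation steps listed above have a well-known classical baseline. The sketch below uses OpenCV's essential-matrix tooling under our own names and defaults; it is not the paper's hybrid deep learning model, and the translation scale stays unresolved, which is exactly the monocular limitation the fourth step addresses.

import cv2
import numpy as np

def relative_pose(frame1, frame2, K):
    # Detect and match ORB features between consecutive frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC-based essential-matrix estimation rejects outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-length translation direction (scale unknown)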

25. Flögel, Daniel, Neel Pratik Bhatt, and Ehsan Hashemi. "Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots." Robotics 11, no. 4 (2022): 82. http://dx.doi.org/10.3390/robotics11040082.

Abstract: A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) and proprioceptive sensory data from a skid-steer mobile robot to enhance accuracy and reduce the variance of the estimated states. The slip-aware localization framework includes: the visual thread, which detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation using a region of interest; and the ego-motion thread, which uses a slip-aware odometry mechanism…

26. Mueggler, Elias, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, and Davide Scaramuzza. "The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM." International Journal of Robotics Research 36, no. 2 (2017): 142–49. http://dx.doi.org/10.1177/0278364917691115.

Abstract: New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness…

27. Cho, Jaechan, Yongchul Jung, Dong-Sun Kim, Seongjoo Lee, and Yunho Jung. "Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems." Sensors 19, no. 14 (2019): 3217. http://dx.doi.org/10.3390/s19143217.

Abstract: Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to the image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to separate target objects from the background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for the real-time processing of the proposed algorithm. The proposed…
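To make the general recipe concrete (ego-motion compensation, then change detection), here is a rough sketch; we substitute simple frame differencing for the paper's Gaussian mixture model and optical-flow machinery, and all names and thresholds are illustrative assumptions.

import cv2
import numpy as np

def moving_object_mask(prev_gray, cur_gray):
    # 1) Estimate ego-motion as a global homography from matched ORB features.
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # 2) Cancel camera motion by warping the previous frame onto the current one.
    h, w = cur_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    # 3) Whatever still differs after compensation is candidate object motion.
    diff = cv2.absdiff(stabilized, cur_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return mask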

28. Zhao, Baigan, Yingping Huang, Hongjian Wei, and Xing Hu. "Ego-Motion Estimation Using Recurrent Convolutional Neural Networks through Optical Flow Learning." Electronics 10, no. 3 (2021): 222. http://dx.doi.org/10.3390/electronics10030222.

Abstract: Visual odometry (VO) refers to the incremental estimation of the motion state of an agent (e.g., a vehicle or robot) by using image information, and is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimation of camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that the motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in…

29. Yuan, Cheng, Jizhou Lai, Pin Lyu, Peng Shi, Wei Zhao, and Kai Huang. "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment." Micromachines 9, no. 12 (2018): 626. http://dx.doi.org/10.3390/mi9120626.

Abstract: Visual odometry (VO) is a new navigation and positioning method that estimates the ego-motion of vehicles from images. However, VO with unsatisfactory performance can fail severely in hostile environments because of sparse features, fast angular motion, or illumination change. Thus, enhancing the robustness of VO in hostile environments has become a popular research topic. In this paper, a novel fault-tolerant visual-inertial odometry (VIO) navigation and positioning framework is presented. The micro electro mechanical systems inertial measurement unit (MEMS-IMU) is used to aid the stereo…

30. Zhao, Baigan, Yingping Huang, Wenyan Ci, and Xing Hu. "Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints." Sensors 22, no. 4 (2022): 1383. http://dx.doi.org/10.3390/s22041383.

Abstract: This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion estimation from monocular video. The framework exploits optical flow (OF) properties to jointly train the depth and ego-motion models. Unlike existing unsupervised methods, our method extracts features from the optical flow rather than from the raw RGB images, thereby enhancing unsupervised learning. In addition, we exploit the forward-backward consistency check of the optical flow to generate a mask of the invalid regions in the image and, accordingly, eliminate the outlier regions…
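The forward-backward consistency check mentioned here has a standard form: a pixel is trusted only if following the forward flow and then the backward flow roughly returns it to its starting point. A minimal NumPy sketch follows; the nearest-neighbor lookup and the threshold constants are simplifying assumptions on our part, not the paper's exact criterion.

import numpy as np

def fb_consistency_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    # flow_fw, flow_bw: (H, W, 2) dense forward and backward flows.
    H, W = flow_fw.shape[:2]
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    # Read the backward flow at each pixel's forward-warped location.
    x2 = np.clip((xs + flow_fw[..., 0]).round().astype(int), 0, W - 1)
    y2 = np.clip((ys + flow_fw[..., 1]).round().astype(int), 0, H - 1)
    bw_at_fw = flow_bw[y2, x2]
    # Consistent motion should cancel out: flow_fw + warped flow_bw ~ 0.
    diff = np.sum((flow_fw + bw_at_fw) ** 2, axis=-1)
    bound = alpha * (np.sum(flow_fw ** 2, axis=-1) + np.sum(bw_at_fw ** 2, axis=-1)) + beta
    return diff < bound  # True where the flow is trusted as a training signal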

31. Cutolo, Fabrizio, Virginia Mamone, Nicola Carbonaro, Vincenzo Ferrari, and Alessandro Tognetti. "Ambiguity-Free Optical–Inertial Tracking for Augmented Reality Headsets." Sensors 20, no. 5 (2020): 1444. http://dx.doi.org/10.3390/s20051444.

Abstract: The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with a short baseline, operating under uncontrolled lighting conditions, suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resiliency to marker occlusions…

32. Duan, Chao, Steffen Junginger, Kerstin Thurow, and Hui Liu. "StereoVO: Learning Stereo Visual Odometry Approach Based on Optical Flow and Depth Information." Applied Sciences 13, no. 10 (2023): 5842. http://dx.doi.org/10.3390/app13105842.

Abstract: We present a novel stereo visual odometry (VO) model that utilizes both optical flow and depth information. While some existing monocular VO methods demonstrate superior performance, they require extra frames or information to initialize the model in order to obtain absolute scale, and they do not take moving objects into account. To address these issues, we have combined optical flow and depth information to estimate ego-motion and propose a framework for stereo VO using deep neural networks. The model simultaneously generates optical flow and depth information outputs from sequential stereo…

33. Kim, Pyojin, Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung, and Hyeong-Geun Kim. "A Benchmark Comparison of Four Off-the-Shelf Proprietary Visual–Inertial Odometry Systems." Sensors 22, no. 24 (2022): 9873. http://dx.doi.org/10.3390/s22249873.

Abstract: Commercial visual–inertial odometry (VIO) systems have been gaining attention as cost-effective, off-the-shelf, six-degree-of-freedom (6-DoF) ego-motion-tracking sensors for estimating accurate and consistent camera pose data, in addition to their ability to operate without external localization from motion capture or global positioning systems. It is unclear from existing results, however, which commercial VIO platforms are the most stable, consistent, and accurate in terms of state estimation for indoor and outdoor robotic applications. We assessed four popular proprietary VIO systems (Apple…

34. Lee, Seokju, Sunghoon Im, Stephen Lin, and In So Kweon. "Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 1863–72. http://dx.doi.org/10.1609/aaai.v35i3.16281.

Abstract: We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion, and depth in a monocular camera setup without supervision. Our technical contributions are three-fold. First, we highlight the fundamental difference between inverse and forward projection while modeling the individual motion of each rigid object, and propose a geometrically correct projection pipeline using a neural forward projection module. Second, we design a unified instance-aware photometric and geometric consistency loss that holistically imposes self-supervision…

35. Araar, Oualid, Nabil Aouf, and Jose Luis Vallejo Dietz. "Power pylon detection and monocular depth estimation from inspection UAVs." Industrial Robot: An International Journal 42, no. 3 (2015): 200–213. http://dx.doi.org/10.1108/ir-11-2014-0419.

Abstract: Purpose: This paper aims to present a new vision-based approach for both the identification and the estimation of the relative distance between an unmanned aerial vehicle (UAV) and a power pylon. Autonomous power line inspection using small UAVs has been the focus of many research works over the past couple of decades. Automatic detection of power pylons is a primary requirement to achieve such autonomous systems. It is still a challenging task due to the complex geometry and cluttered background of these structures. Design/methodology/approach: The identification solution proposed avoids the…

36. Tibebu, Haileleol, Varuna De-Silva, Corentin Artaud, Rafael Pina, and Xiyu Shi. "Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation." Sensors 22, no. 20 (2022): 8021. http://dx.doi.org/10.3390/s22208021.

Abstract: Recent deep learning frameworks draw strong research interest in the application of ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies primarily focus on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture for fusion of RGB images and LiDAR laser scans…

37. Wang, Zhe, Xisheng Li, Xiaojuan Zhang, Yanru Bai, and Chengcai Zheng. "Blind image deblurring for a close scene under a 6-DOF motion path." Sensor Review 41, no. 2 (2021): 216–26. http://dx.doi.org/10.1108/sr-06-2020-0143.

Abstract: Purpose: How to model the blind image deblurring that arises when a camera undergoes ego-motion while observing a static, close scene. In particular, this paper aims to detail how the blurry image can be restored under a sequence of linear models of the point spread function (PSF) that are derived from the 6-degree-of-freedom (DOF) camera's accurate path during the long exposure time. Design/methodology/approach: There are two existing techniques, namely, an estimation of the PSF and blind image deconvolution. Based on online and short-period inertial measurement unit (IMU) self-calibration, …

38. Durant, Szonya, and Johannes M. Zanker. "Variation in the Local Motion Statistics of Real-Life Optic Flow Scenes." Neural Computation 24, no. 7 (2012): 1781–805. http://dx.doi.org/10.1162/neco_a_00294.

Abstract: Optic flow motion patterns can be a rich source of information about our own movement and about the structure of the environment we are moving in. We investigate the information available to the brain under real operating conditions by analyzing video sequences generated by physically moving a camera through various typical human environments. We consider to what extent the motion signal maps generated by a biologically plausible, two-dimensional array of correlation-based motion detectors (2DMD) not only depend on egomotion, but also reflect the spatial setup of such environments. We analyzed…

39. Song, Moonhyung, and Dongho Shin. "A study on ego-motion estimation based on stereo camera sensor and 2G1Y inertial sensor with considering vehicle dynamics." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 8 (2018): 2174–86. http://dx.doi.org/10.1177/0954407018776429.

40. Liu, Hailin, Liangfang Tian, Qiliang Du, and Wenjie Xu. "Robust RGB-D SLAM in highly dynamic environments based on probability observations and clustering optimization." Measurement Science and Technology 35, no. 3 (2023): 035405. http://dx.doi.org/10.1088/1361-6501/ad0afd.

Abstract: Visual simultaneous localization and mapping (SLAM) is the underlying support of unmanned systems. Currently, most visual SLAM methods are based on the static-environment assumption, so dynamic objects in the camera's field of view will seriously disrupt their performance. In view of this, an RGB-D SLAM approach based on probability observations and clustering optimization for highly dynamic environments is proposed, which can effectively eliminate the influence of dynamic objects and accurately estimate the ego-motion of an RGB-D camera. The method contains a dual static…

41. Nam, Dinh Van, and Kim Gon-Woo. "Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments." Sensors 20, no. 10 (2020): 2922. http://dx.doi.org/10.3390/s20102922.

Abstract: Robotic mapping and odometry are the primary competencies of a navigation system for an autonomous mobile robot. However, the state estimate of the robot typically drifts over time, and its accuracy degrades critically when using only proprioceptive sensors in indoor environments. Besides, the accuracy of an ego-motion estimate is severely diminished in dynamic environments because of the influence of both dynamic objects and light reflection. To this end, a multi-sensor fusion technique is employed to bound the navigation error by adopting the complementary nature…

42. Uhm, Taeyoung, Minsoo Ryu, and Jong-Il Park. "Fine-Motion Estimation Using Ego/Exo-Cameras." ETRI Journal 37, no. 4 (2015): 766–71. http://dx.doi.org/10.4218/etrij.15.0114.0525.

43. Chen, Yong-Sheng, Lin-Gwo Liou, Yi-Ping Hung, and Chiou-Shann Fuh. "Three-dimensional ego-motion estimation from motion fields observed with multiple cameras." Pattern Recognition 34, no. 8 (2001): 1573–83. http://dx.doi.org/10.1016/s0031-3203(00)00092-3.

44. Dimiccoli, Mariella, and Petia Radeva. "Visual Lifelogging in the Era of Outstanding Digitization." Digital Presentation and Preservation of Cultural and Scientific Heritage 5 (September 30, 2015): 59–64. http://dx.doi.org/10.55630/dipp.2015.5.4.

Abstract: In this paper, we give an overview of the emerging trend of the digitized self, focusing on visual lifelogging through wearable cameras. This is about continuously recording our life from a first-person view by wearing a camera that passively captures images. On one hand, visual lifelogging has opened the door to a large number of applications, including health. On the other, it has also raised new challenges in the field of data analysis as well as new ethical concerns. While currently increasing efforts are being devoted to exploiting lifelogging data for the improvement of personal well-being…

45. Yin, Hongpei, Peter Xiaoping Liu, and Minhua Zheng. "Ego-Motion Estimation With Stereo Cameras Using Efficient 3D–2D Edge Correspondences." IEEE Transactions on Instrumentation and Measurement 71 (2022): 1–11. http://dx.doi.org/10.1109/tim.2022.3198489.

46. Droeschel, David, Stefan May, Dirk Holz, and Sven Behnke. "Fusing Time-of-Flight Cameras and Inertial Measurement Units for Ego-Motion Estimation." Automatika 52, no. 3 (2011): 189–98. http://dx.doi.org/10.1080/00051144.2011.11828419.

47. Vujasinović, Stéphane, Stefan Becker, Timo Breuer, Sebastian Bullinger, Norbert Scherer-Negenborn, and Michael Arens. "Integration of the 3D Environment for UAV Onboard Visual Object Tracking." Applied Sciences 10, no. 21 (2020): 7622. http://dx.doi.org/10.3390/app10217622.

Abstract: Single visual object tracking from an unmanned aerial vehicle (UAV) poses fundamental challenges such as object occlusion, small-scale objects, background clutter, and abrupt camera motion. To tackle these difficulties, we propose to integrate the 3D structure of the observed scene into a detection-by-tracking algorithm. We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator. The 3D reconstruction of the scene is computed with an image-based Structure-from-Motion (SfM) component that enables us to leverage a state estimator…

48. Zhang, Yongcong, Bangyan Liao, Delin Qu, et al. "Ego-motion Estimation for Vehicles with a Rolling Shutter Camera." IEEE Transactions on Intelligent Vehicles, 2024, 1–13. http://dx.doi.org/10.1109/tiv.2024.3436703.

49. Gilles, Maximilian, and Sascha Ibrahimpasic. "Unsupervised deep learning based ego motion estimation with a downward facing camera." Visual Computer, November 27, 2021. http://dx.doi.org/10.1007/s00371-021-02345-6.

Abstract: Knowing the robot's pose is a crucial prerequisite for mobile robot tasks such as collision avoidance or autonomous navigation. Using powerful predictive models to estimate transformations for visual odometry via downward-facing cameras is an understudied area of research. This work proposes a novel approach based on deep learning for estimating ego motion with a downward-looking camera. The network can be trained completely unsupervised and is not restricted to a specific motion model. We propose two neural network architectures based on the Early Fusion and Slow Fusion design principles…

50. Zhou, Wenhui, Hua Zhang, Zhengmao Yan, Weisheng Wang, and Lili Lin. "DecoupledPoseNet: Cascade Decoupled Pose Learning for Unsupervised Camera Ego-motion Estimation." IEEE Transactions on Multimedia, 2022, 1. http://dx.doi.org/10.1109/tmm.2022.3144958.
