
Journal articles on the topic 'Camera pose'



Consult the top 50 journal articles for your research on the topic 'Camera pose.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Sudars, K., R. Cacurs, I. Homjakovs, and J. Judvaitis. "LEDs based video camera pose estimation." Bulletin of the Polish Academy of Sciences Technical Sciences 63, no. 4 (2015): 897–905. http://dx.doi.org/10.1515/bpasts-2015-0102.

Abstract:
For 3D object localization and tracking with multiple cameras, the camera poses have to be known with high precision. The paper evaluates camera pose estimation via a fundamental matrix and via a known object in an environment of multiple static cameras. A special feature point extraction technique based on LED (light emitting diode) point detection and matching has been developed for this purpose. LED point detection is solved by searching for local maxima in the images, and LED point matching by assigning a patterned time function to each light source. Emitting LEDs have
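The fundamental-matrix route evaluated above can be illustrated with the classic normalized eight-point algorithm. The sketch below is a generic NumPy implementation of that textbook method, not the authors' code; it assumes the matched LED image points arrive as N×2 pixel arrays:

```python
import numpy as np

def normalize(pts):
    """Shift points to their centroid and scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def fundamental_8pt(x1, x2):
    """Estimate F from >= 8 matched points (Nx2 arrays), so that x2' F x1 = 0."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

Once F is known, the relative pose follows by upgrading F to an essential matrix with the calibration matrices and decomposing it.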
2

Zhang, Zhe, Chunyu Wang, and Wenhu Qin. "Semantically Synchronizing Multiple-Camera Systems with Human Pose Estimation." Sensors 21, no. 7 (2021): 2464. http://dx.doi.org/10.3390/s21072464.

Abstract:
Multiple-camera systems can expand coverage and mitigate occlusion problems. However, temporal synchronization remains a problem for budget cameras and capture devices. We propose an out-of-the-box framework to temporally synchronize multiple cameras using semantic human pose estimation from the videos. Human pose predictions are obtained with an off-the-shelf pose estimator for each camera. Our method first calibrates each pair of cameras by minimizing an energy function related to epipolar distances. We also propose a simple yet effective multiple-person association algorithm across cam
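The pairwise synchronization idea, minimizing an energy built from epipolar distances, can be sketched in a few lines. This is our illustrative reconstruction under simplifying assumptions (a known fundamental matrix F, integer frame offsets, and (T, J, 2) arrays of detected joints per camera); the function names are ours, not the paper's:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Symmetric point-to-epipolar-line distance for Nx2 matched points."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    l2 = x1h @ F.T                          # epipolar lines in image 2
    l1 = x2h @ F                            # epipolar lines in image 1
    num = np.abs(np.sum(x2h * l2, axis=1))  # |x2' F x1|
    return 0.5 * (num / np.hypot(l1[:, 0], l1[:, 1])
                  + num / np.hypot(l2[:, 0], l2[:, 1]))

def best_offset(F, track1, track2, max_shift=5):
    """Integer frame offset minimizing the mean epipolar distance between
    joint tracks from two cameras (track1, track2: (T, J, 2) arrays)."""
    def energy(shift):
        if shift >= 0:
            a, b = track1[shift:], track2
        else:
            a, b = track1, track2[-shift:]
        n = min(len(a), len(b))
        return np.mean([epipolar_distance(F, a[i], b[i]).mean()
                        for i in range(n)])
    return min(range(-max_shift, max_shift + 1), key=energy)
```

With correctly aligned frames the joints satisfy the epipolar constraint, so the energy is minimized at the true offset; mismatched frames pair each joint with a stale epipolar line and raise the score.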
3

Li, Jing, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, and Yizhou Wang. "Pose-Assisted Multi-Camera Collaboration for Active Object Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 759–66. http://dx.doi.org/10.1609/aaai.v34i01.5419.

Abstract:
Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, there are a number of challenges when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for ac
4

Chino, Masaki, Junwoon Lee, Qi An, and Atsushi Yamashita. "Robot Localization by Data Integration of Multiple Thermal Cameras in Low-Light Environment." International Journal of Automation Technology 19, no. 4 (2025): 566–74. https://doi.org/10.20965/ijat.2025.p0566.

Abstract:
A method is proposed for interpolating pose information by integrating data from multiple thermal cameras when a global navigation satellite system temporarily experiences a decrease in accuracy. When temperature information obtained from thermal cameras is visualized, a two-stage temperature range restriction is applied to focus only on areas with temperature variations, making conversion into clearer images possible. To compensate for the narrow field of view of thermal cameras, multiple thermal cameras are oriented in different directions. Pose estimation is performed with each camera, and
5

Minami, Mamoru, and Wei Song. "Hand-Eye Motion-Invariant Pose Estimation with Online 1-Step GA -3D Pose Tracking Accuracy Evaluation in Dynamic Hand-Eye Oscillation-." Journal of Robotics and Mechatronics 21, no. 6 (2009): 709–19. http://dx.doi.org/10.20965/jrm.2009.p0709.

Abstract:
This paper presents online pose measurement for a 3-dimensional (3-D) object detected by stereo hand-eye cameras. Our proposal improves 3-D pose tracking accuracy by compensating for the fictional motion of the target in camera images stemming from the ego motion of the hand-eye camera caused by dynamic manipulator oscillation. This motion feed-forward (MFF) is combined into the evolutionary search of a genetic algorithm (GA) and fitness evaluation based on stereo model matching whose pose is expressed using a unit quaternion. The proposal’s effectiveness was confirmed in simulation tracking a
6

Kaichi, Tomoya, Tsubasa Maruyama, Mitsunori Tada, and Hideo Saito. "Resolving Position Ambiguity of IMU-Based Human Pose with a Single RGB Camera." Sensors 20, no. 19 (2020): 5453. http://dx.doi.org/10.3390/s20195453.

Abstract:
Human motion capture (MoCap) plays a key role in healthcare and human–robot collaboration. Some researchers have combined orientation measurements from inertial measurement units (IMUs) and positional inference from cameras to reconstruct 3D human motion. Their works utilize multiple cameras or depth sensors to localize the human in three dimensions. Such multiple cameras are not always available in our daily life, but a single camera embedded in smart IP devices has recently become popular. Therefore, we present a 3D pose estimation approach from IMUs and a single camera. In order to
7

Tang, Shengjun, Weixi Wang, Xiaoming Li, and Zhilu Yuan. "Stereo RGB-D indoor mapping with precise stream fusing strategy." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-362-2019.

Abstract:
In order to achieve more robust pose tracking and mapping in visual SLAM, robotics researchers have recently shown growing interest in utilising multiple cameras, which can provide more observations to fulfil the frame registration and map updating tasks. This implies that better pose tracking robustness can be achieved by extending monocular visual SLAM to utilise measurements from multiple cameras. [1] proposed a visual SLAM method using multiple RGB-D cameras, which integrates the observations from multiple cameras for camera tr
8

Oral, Burhan Burak, Alptuğ Çakıcı, and Arman Savran. "Evaluation of Convolutional Networks for Event Camera Face Pose Alignment." Academic Platform Journal of Engineering and Smart Systems 13, no. 2 (2025): 22–30. https://doi.org/10.21541/apjess.1417068.

Abstract:
Event cameras offer substantial advantages over conventional video cameras with their efficiency, extremely high temporal resolution, low latency, and high dynamic range. These benefits have led to applications in various vision domains. Recently, they have been applied to facial recognition tasks as well. However, while significant advantages of event cameras in some facial processing tasks have been demonstrated, the initial stage in almost any task, i.e., face alignment, is not on par with conventional cameras. This study investigates the use of face alignment convolutional networks reg
9

Eichler, Nadav, Hagit Hel-Or, and Ilan Shimshoni. "Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose." Sensors 22, no. 22 (2022): 8900. http://dx.doi.org/10.3390/s22228900.

Abstract:
RGB and depth cameras are extensively used for the 3D tracking of human pose and motion. Typically, these cameras calculate a set of 3D points representing the human body as a skeletal structure. The tracking capabilities of a single camera are often affected by noise and inaccuracies due to occluded body parts. Multiple-camera setups offer a solution to maximize coverage of the captured human body and to minimize occlusions. According to best practices, fusing information across multiple cameras typically requires spatio-temporal calibration. First, the cameras must synchronize their internal
10

Shen, Siyuan, Guanfeng Yu, Lei Zhang, Youyu Yan, and Zhengjun Zhai. "LandNet: Combine CNN and Transformer to Learn Absolute Camera Pose for the Fixed-Wing Aircraft Approach and Landing." Remote Sensing 17, no. 4 (2025): 653. https://doi.org/10.3390/rs17040653.

Abstract:
Camera localization approaches often degrade in challenging environments characterized by illumination variations and significant viewpoint changes, presenting critical limitations for fixed-wing aircraft landing applications. To address these challenges, we propose LandNet—a novel absolute camera pose estimation network specifically designed for airborne scenarios. Our framework processes images from forward-looking aircraft cameras to directly predict 6-DoF camera poses, subsequently enabling aircraft pose determination through rigid transformation. As a first step, we design two encoders fr
11

Zhu, Xiao Zhou, Xiao Qian Chen, and Xin Song. "Study on Camera Pose Estimation." Applied Mechanics and Materials 631-632 (September 2014): 462–69. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.462.

Abstract:
A common need in photogrammetry, robotics and computer vision is camera pose estimation. A comparative analysis is presented here for three classical and representative algorithms, the direct linear transform (DLT), EPnP, and the Cayley method, each of which computes the translation and rotation matrix non-iteratively from six or more point correspondences. The comparison shows qualitative and quantitative experimental results to determine (1) the accuracy and robustness under the influence of different levels of noise, (2) the accuracy, robustness and efficiency for dif
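Of the three compared algorithms, the direct linear transform is the simplest to write down. Below is a minimal NumPy sketch of generic DLT resection (our illustration, not the paper's implementation): each of the six or more 3D-2D correspondences contributes two rows to a homogeneous system whose SVD null vector is the flattened 3x4 projection matrix.

```python
import numpy as np

def dlt_pose(X, x):
    """Estimate the 3x4 projection matrix P from world points X (n,3)
    and pixels x (n,2); needs n >= 6 non-coplanar points."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([p, np.zeros(4), -u * p]))
        rows.append(np.concatenate([np.zeros(4), p, -v * p]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return Vt[-1].reshape(3, 4)   # null vector, defined up to scale

def reproject(P, X):
    """Project world points with P and dehomogenize to pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    p = (P @ Xh.T).T
    return p[:, :2] / p[:, 2:]
```

The rotation and translation then follow by factoring P into K[R|t], e.g. with an RQ decomposition.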
12

Albl, Cenek, Zuzana Kukelova, Viktor Larsson, and Tomas Pajdla. "Rolling Shutter Camera Absolute Pose." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (2020): 1439–52. http://dx.doi.org/10.1109/tpami.2019.2894395.

13

Muñoz-Salinas, Rafael, E. Yeguas-Bolivar, A. Saffiotti, and R. Medina-Carnicer. "Multi-camera head pose estimation." Machine Vision and Applications 23, no. 3 (2012): 479–90. http://dx.doi.org/10.1007/s00138-012-0410-z.

14

Jorstad, Anne, Daniel DeMenthon, I.-Jeng Wang, and Philippe Burlina. "Distributed Consensus on Camera Pose." IEEE Transactions on Image Processing 19, no. 9 (2010): 2396–407. http://dx.doi.org/10.1109/tip.2010.2047167.

15

Hu, Liang, Dianqi Sun, Huixian Duan, An Shu, Shanshan Zhou, and Haodong Pei. "Non-Cooperative Spacecraft Pose Measurement with Binocular Camera and TOF Camera Collaboration." Applied Sciences 13, no. 3 (2023): 1420. http://dx.doi.org/10.3390/app13031420.

Abstract:
Non-cooperative spacecraft pose acquisition is a challenge in on-orbit service (OOS), especially for targets with unknown structures. A method for the pose measurement of non-cooperative spacecraft based on the collaboration of binocular and time-of-flight (TOF) cameras is proposed in this study. The joint calibration is carried out to obtain the transformation matrix from the left camera coordinate system to the TOF camera system. The initial pose acquisition is mainly divided into feature point association and relative motion estimation. The initial value and key point information generated
16

Abayomi-Alli, A., E. O. Omidiora, S. O. Olabiyisi, J. A. Ojo, and A. Y. Akingboye. "Blackface Surveillance Camera Database for Evaluating Face Recognition in Low Quality Scenarios." Journal of Natural Sciences Engineering and Technology 15, no. 2 (2017): 13–31. http://dx.doi.org/10.51406/jnset.v15i2.1668.

Abstract:
Many face recognition algorithms perform poorly in real-life surveillance scenarios because they were tested with datasets that are already biased towards high quality images and certain ethnic or racial types. In this paper, a black face surveillance camera (BFSC) database is described, which was collected from four low quality cameras and a professional camera. There were fifty (50) random volunteers, and 2,850 images were collected for the frontal mugshot, surveillance (visible light), surveillance (IR night vision), and pose variation datasets, respectively. Images were taken at distance 3.4,
17

Lei, Wentai, Mengdi Xu, Feifei Hou, et al. "Calibration Venus: An Interactive Camera Calibration Method Based on Search Algorithm and Pose Decomposition." Electronics 9, no. 12 (2020): 2170. http://dx.doi.org/10.3390/electronics9122170.

Abstract:
Cameras are widely used in many scenes, such as robot positioning and unmanned driving, in which camera calibration is a major task. The interactive camera calibration method based on a plane board is becoming popular due to its stability and handleability. However, most methods choose suggestions subjectively from a fixed pose dataset, which is error-prone and limited for different camera models. In addition, these methods do not provide clear guidelines on how to place the board in the specified pose. This paper proposes a new interactive calibration method, named ‘Calibrati
18

Elashry, A., and C. Toth. "Improving Camera Pose Estimation Using Swarm Particle Algorithms." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-3-2023 (September 5, 2023): 87–93. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-3-2023-87-2023.

Abstract:
Most computer vision and photogrammetry applications rely on accurately estimating the camera pose, such as visual navigation, motion tracking, stereo photogrammetry, and structure from motion. The Essential matrix is a well-known model in computer vision that provides information about the relative orientation between two images, including the rotation and translation, for calibrated cameras with a known camera matrix. To estimate the Essential matrix, the camera calibration matrices, which include the focal length and principal point location, must be known, and the estimation process t
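The relative rotation and translation encoded by the Essential matrix are extracted with the standard textbook decomposition (Hartley and Zisserman), which yields four (R, t) candidates that are disambiguated by a cheirality test. Below is a NumPy sketch of that generic step, our illustration rather than the swarm-based estimation the paper studies:

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates encoded by an essential matrix.
    The physically valid pair is the one placing triangulated points in
    front of both cameras (cheirality test, not shown here)."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1); E is only defined up to sign.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                  # translation direction, up to sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Because E has rank 2 with two equal singular values, any valid SVD produces a candidate set that contains the true relative pose (with t known only up to scale).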
19

Fu, Qiang, Xiang-Yang Chen, and Wei He. "A Survey on 3D Visual Tracking of Multicopters." International Journal of Automation and Computing 16, no. 6 (2019): 707–19. http://dx.doi.org/10.1007/s11633-019-1199-2.

Abstract:
Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSMs) by using cheap and off-the-shelf cameras. This paper firstly gives an overview of the three key technologies of a 3D VTSMs: multi-camera
20

Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (2022): 1097. http://dx.doi.org/10.3390/s22031097.

Abstract:
This paper proposes a new technique for performing 3D static-point cloud registration after calibrating a multi-view RGB-D camera using a 3D joint set. Consistent feature points are required to calibrate a multi-view camera, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool, such as a chessboard, is used to calibrate a multi-view camera. However, this paper uses joints on a human skeleton as feature points for calibrating a multi-view camera, to perform calibration efficiently without special tools. We propose an RGB-
21

Luo, Hao, Wenjie Luo, and Wenzhu Yang. "Camera Pose Generation Based on Unity3D." Information 16, no. 4 (2025): 315. https://doi.org/10.3390/info16040315.

Abstract:
Deep learning models performing complex tasks require the support of datasets. With the advancement of virtual reality technology, the use of virtual datasets in deep learning models is becoming more and more widespread. Indoor scenes represent a significant area of interest for the application of machine vision technologies. Existing virtual indoor datasets exhibit deficiencies with regard to camera poses, resulting in problems such as occlusion, object omission, and objects occupying too small a proportion of the image, and perform poorly in training for object detection and simultaneou
22

García-Ruiz, Pablo, Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez, and Rafael Medina-Carnicer. "Sparse Indoor Camera Positioning with Fiducial Markers." Applied Sciences 15, no. 4 (2025): 1855. https://doi.org/10.3390/app15041855.

Abstract:
Accurately estimating the pose of large arrays of fixed indoor cameras presents a significant challenge in computer vision, especially since traditional methods predominantly rely on overlapping camera views. Existing approaches for positioning non-overlapping cameras are scarce and generally limited to simplistic scenarios dependent on specific environmental features, thereby leaving a significant gap in applications for large and complex settings. To bridge this gap, this paper introduces a novel methodology that effectively positions cameras with and without overlapping views in complex ind
23

Shalimova, E. A., E. V. Shalnov, and A. S. Konushin. "Camera parameters estimation from pose detections." Computer Optics 44, no. 3 (2020): 385–92. http://dx.doi.org/10.18287/2412-6179-co-600.

Abstract:
Some computer vision tasks become easier with known camera calibration. We propose a method for estimating camera focal length, location and orientation by observing human poses in the scene. Weak requirements on the observed scene make the method applicable to a wide range of scenarios. Our evaluation shows that, even when trained only on a synthetic dataset, the proposed method outperforms known solutions. Our experiments show that using only human poses as the input also allows the proposed method to calibrate dynamic visual sensors.
24

Jung, Ho Gi, and Jae Kyu Suhr. "Lane Detection-based Camera Pose Estimation." Transactions of the Korean Society of Automotive Engineers 23, no. 5 (2015): 463–70. http://dx.doi.org/10.7467/ksae.2015.23.5.463.

25

Singhirunnusorn, Khomsun, Farbod Fahimi, and Ramazan Aygun. "Single‐camera pose estimation using mirage." IET Computer Vision 12, no. 5 (2018): 720–27. http://dx.doi.org/10.1049/iet-cvi.2017.0407.

26

Quan, Long, and Zhongdan Lan. "Linear N-Point Camera Pose Determination." IEEE Transactions on Pattern Analysis and Machine Intelligence 21, no. 8 (1999): 774–80. http://dx.doi.org/10.1109/34.784291.

27

Kim, Ju-Young, In-Seon Kim, Dai-Yeol Yun, Tae-Won Jung, Soon-Chul Kwon, and Kye-Dong Jung. "Visual Positioning System Based on 6D Object Pose Estimation Using Mobile Web." Electronics 11, no. 6 (2022): 865. http://dx.doi.org/10.3390/electronics11060865.

Abstract:
Recently, the demand for location-based services using mobile devices in indoor spaces without a global positioning system (GPS) has increased. However, to the best of our knowledge, no solution fully applicable to indoor positioning and navigation that ensures real-time mobility on mobile devices, as global navigation satellite system (GNSS) solutions do outdoors, has achieved remarkable results in indoor circumstances. Indoor single-shot image positioning using smartphone cameras does not require a dedicated infrastructure and offers the advantages of low price and large potential markets
28

Smirnov, A. O. "Camera Pose Estimation Using a 3D Gaussian Splatting Radiance Field." Kibernetika i vyčislitelʹnaâ tehnika 216, no. 2(216) (2024): 15–25. http://dx.doi.org/10.15407/kvt216.02.015.

Abstract:
Introduction. Accurate camera pose estimation is crucial for many applications ranging from robotics to virtual and augmented reality. The process of determining an agent's pose from a set of observations is called odometry. This work focuses on visual odometry, which uses only images from a camera as the input data. The purpose of the paper is to demonstrate an approach for small-scale camera pose estimation using 3D Gaussians as the environment representation. Methods. Given the rise of neural volumetric representations for environment reconstruction, this work relies on Gaussian Splatting
29

Zhou, Yu, Wenfei Liu, Xionghui Lu, and Xu Zhong. "Single-Camera Trilateration." Applied Sciences 9, no. 24 (2019): 5374. http://dx.doi.org/10.3390/app9245374.

Abstract:
This paper presents a single-camera trilateration scheme which estimates the instantaneous 3D pose of a regular forward-looking camera from a single image of landmarks at known positions. Derived on the basis of the classical pinhole camera model and principles of perspective geometry, the proposed algorithm estimates the camera position and orientation successively. It provides a convenient self-localization tool for mobile robots and vehicles equipped with onboard cameras. Performance analysis has been conducted through extensive simulations with representative examples, which provides an in
30

Dang, Chang Gwon, Seung Soo Lee, Mahboob Alam, et al. "Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment." Sensors 24, no. 2 (2024): 427. http://dx.doi.org/10.3390/s24020427.

Abstract:
The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and then the data of each camera are transformed to global coordinates. However, when
31

Dinc, Semih, Farbod Fahimi, and Ramazan Aygun. "Mirage: an O(n) time analytical solution to 3D camera pose estimation with multi-camera support." Robotica 35, no. 12 (2017): 2278–96. http://dx.doi.org/10.1017/s0263574716000874.

Abstract:
Mirage is a camera pose estimation method that analytically solves pose parameters in linear time for multi-camera systems. It utilizes a reference camera pose to calculate the pose by minimizing the 2D projection error between reference and actual pixel coordinates. Previously, Mirage has been successfully applied to the trajectory tracking (visual servoing) problem. In this study, a comprehensive evaluation of Mirage is performed, focusing particularly on camera pose estimation. Experiments have been performed using simulated and real data in noisy and noise-free environment
32

Herdiansyah, Junardo, Febi Ariefka Septian Putra, and Dwi Septiyanto. "Implementation of Zhang's Camera Calibration Algorithm on a Single Camera for Accurate Pose Estimation Using ArUco Markers." Journal of Fuzzy Systems and Control 2, no. 3 (2024): 176–88. https://doi.org/10.59247/jfsc.v2i3.256.

Abstract:
Pose estimation using ArUco markers is a method to estimate the position of ArUco markers relative to the camera lens. Accurate pose estimation is crucial for autonomous systems to navigate robots effectively. This study aims to achieve an ArUco marker pose estimation accuracy of at least 95% using a single camera. The method employed to obtain accurate ArUco pose estimation results is to calibrate the camera with Zhang's camera calibration algorithm. This calibration is necessary to obtain the camera matrix and distortion coefficients, thereby enhancing the accuracy of the pose estimation
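The quantities that Zhang's calibration recovers, the camera matrix and the distortion coefficients, enter marker pose estimation through the projection model they parameterize. The following is a generic pinhole-plus-radial-distortion sketch (two radial terms only, our simplification; not the study's code):

```python
import numpy as np

def project_points(X, R, t, K, dist):
    """Project world points X (n,3) through pose (R, t), intrinsics K,
    and radial distortion coefficients dist = (k1, k2)."""
    Xc = (R @ X.T).T + t                      # world -> camera frame
    xn = Xc[:, :2] / Xc[:, 2:]                # normalized image coordinates
    r2 = (xn ** 2).sum(axis=1, keepdims=True)
    k1, k2 = dist
    xd = xn * (1 + k1 * r2 + k2 * r2 ** 2)    # radial distortion
    return (K[:2, :2] @ xd.T).T + K[:2, 2]    # focal lengths + principal point
```

Pose estimation inverts this model: given the detected marker corners and their known 3D layout, R and t are sought so the projections match, which is why an inaccurate camera matrix or distortion terms degrade the estimated pose.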
33

Liu, Ruijin, Dapeng Chen, Tie Liu, Zhiliang Xiong, and Zejian Yuan. "Learning to Predict 3D Lane Shape and Camera Pose from a Single Image via Geometry Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 1765–72. http://dx.doi.org/10.1609/aaai.v36i2.20069.

Abstract:
Detecting 3D lanes from the camera is a rising problem for autonomous vehicles. In this task, the correct camera pose is the key to generating accurate lanes, which can transform an image from perspective-view to the top-view. With this transformation, we can get rid of the perspective effects so that 3D lanes would look similar and can accurately be fitted by low-order polynomials. However, mainstream 3D lane detectors rely on perfect camera poses provided by other sensors, which is expensive and encounters multi-sensor calibration issues. To overcome this problem, we propose to predict 3D la
34

Oščádal, Petr, Dominik Heczko, Aleš Vysocký, et al. "Improved Pose Estimation of Aruco Tags Using a Novel 3D Placement Strategy." Sensors 20, no. 17 (2020): 4825. http://dx.doi.org/10.3390/s20174825.

Abstract:
This paper extends the topic of monocular pose estimation of an object using Aruco tags imaged by RGB cameras. The accuracy of the OpenCV camera calibration and Aruco pose estimation pipelines is tested in detail by performing standardized tests with multiple Intel RealSense D435 cameras. Analyzing the results led to a way to significantly improve the performance of Aruco tag localization, which involved designing a 3D Aruco board, a set of Aruco tags placed at an angle to each other, and developing a library to combine the pose data from the individual tags for both higher accuracy a
35

Shtain, Z., and S. Filin. "Towards an Accurate Low-Cost Stereo-Based Navigation of Unmanned Platforms in GNSS-Denied Areas." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 323–29. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-323-2019.

Abstract:
While lightweight stereo vision sensors provide detailed and high-resolution information that allows robust and accurate localization, the computational demands required for such a process are doubled compared to monocular sensors. In this paper, an alternative model for pose estimation of stereo sensors is introduced, which provides an efficient and precise framework for investigating system configurations and maximizing pose accuracies. Using the proposed formulation, we examine the parameters that affect accurate pose estimation and their magnitudes, and show that for standard operational
36

Gong, Xuanrui, Yaowen Lv, Xiping Xu, Yuxuan Wang, and Mengdi Li. "Pose Estimation of Omnidirectional Camera with Improved EPnP Algorithm." Sensors 21, no. 12 (2021): 4008. http://dx.doi.org/10.3390/s21124008.

Abstract:
The omnidirectional camera, having the advantage of broadening the field of view, realizes 360° imaging in the horizontal direction. Due to light reflection from the mirror surface, the collinearity relation is altered and the imaged scene has severe nonlinear distortions. This makes it more difficult to estimate the pose of the omnidirectional camera. To solve this problem, we derive the mapping from omnidirectional camera to traditional camera and propose an omnidirectional camera linear imaging model. Based on the linear imaging model, we improve the EPnP algorithm to calculate the omnidire
37

Yang, Cong, Gilles Simon, John See, Marie-Odile Berger, and Wenyong Wang. "WatchPose: A View-Aware Approach for Camera Pose Data Collection in Industrial Environments." Sensors 20, no. 11 (2020): 3045. http://dx.doi.org/10.3390/s20113045.

Abstract:
Collecting correlated scene images and camera poses is an essential step towards learning absolute camera pose regression models. While the acquisition of such data in living environments is relatively easy by following regular roads and paths, it is still a challenging task in constricted industrial environments. This is because industrial objects have varied sizes and inspections are usually carried out with non-constant motions. As a result, regression models are more sensitive to scene images with respect to viewpoints and distances. Motivated by this, we present a simple but efficient cam
38

Guo, Shasha, Jing Xu, Ming Fang, and Ying Tian. "Relative Pose of IMU-Camera Calibration Based on BP Network." Modern Applied Science 11, no. 10 (2017): 15. http://dx.doi.org/10.5539/mas.v11n10p15.

Abstract:
There are many applications of the combination of an IMU (inertial measurement unit) and a camera in fields such as electronic image stabilization, augmented reality and navigation, where camera-IMU relative pose calibration is one of the key technologies; it may effectively avoid the cases of insufficient feature points, unclear texture, blurred images, etc. In this paper, a new camera-IMU relative pose calibration method is proposed by establishing a BP neural network model. Thus we can obtain the transform from IMU inertial measurements to images and achieve camera-IMU relative pose calibration.
39

Van Crombrugge, Izaak, Rudi Penne, and Steve Vanlanduit. "Extrinsic Camera Calibration with Line-Laser Projection." Sensors 21, no. 4 (2021): 1091. http://dx.doi.org/10.3390/s21041091.

Abstract:
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the c
40

Garcia-Salguero, Mercedes, Javier Gonzalez-Jimenez, and Francisco-Angel Moreno. "Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction." Sensors 19, no. 22 (2019): 4943. http://dx.doi.org/10.3390/s19224943.

Abstract:
Human–Robot interaction represents a cornerstone of mobile robotics, especially within the field of social robots. In this context, user localization becomes of crucial importance for the interaction. This work investigates the capabilities of wide field-of-view RGB cameras to estimate the 3D position and orientation (i.e., the pose) of a user in the environment. For that, we employ a social robot endowed with a fish-eye camera hosted in a tilting head and develop two complementary approaches: (1) a fast method relying on a single image that estimates the user pose from the detection of their
41

Xu, Chao, Guoxu Li, Ye Bai, Yuzhuo Bai, Zheng Cao, and Cheng Han. "Tracking and Registration Technology Based on Panoramic Cameras." Applied Sciences 15, no. 13 (2025): 7397. https://doi.org/10.3390/app15137397.

Abstract:
Augmented reality (AR) has become a research focus in computer vision and graphics, with growing applications driven by advances in artificial intelligence and the emergence of the metaverse. Panoramic cameras offer new opportunities for AR due to their wide field of view but also pose significant challenges for camera pose estimation because of severe distortion and complex scene textures. To address these issues, this paper proposes a lightweight, unsupervised deep learning model for panoramic camera pose estimation. The model consists of a depth estimation sub-network and a pose estimation
42

Rajarathinam, Robin Jephthah, Chris Palaguachi, and Jina Kang. "360-Degree Cameras vs Traditional Cameras in Multimodal Learning Analytics: Comparative Study of Facial Recognition and Pose Estimation." Journal of Educational Data Mining 17, no. 1 (2025): 157–82. https://doi.org/10.5281/zenodo.14966499.

Abstract:
Multimodal Learning Analytics (MMLA) has emerged as a powerful approach within the computer-supported collaborative learning community, offering nuanced insights into learning processes through diverse data sources. Despite its potential, the prevalent reliance on traditional instruments such as tripod-mounted digital cameras for video capture often results in suboptimal data quality for facial expressions and poses captured, which is crucial for understanding collaborative dynamics. This study introduces an innovative approach to overcome this limitation by employing 360-degree camera technol
43

O’Mahony, Niall, Lenka Krpalkova, Gearoid Sayers, Lea Krump, Joseph Walsh, and Daniel Riordan. "Two- and Three-Dimensional Computer Vision Techniques for More Reliable Body Condition Scoring." Dairy 4, no. 1 (2022): 1–25. http://dx.doi.org/10.3390/dairy4010001.

Abstract:
This article identifies the essential technologies and considerations for the development of an Automated Cow Monitoring System (ACMS) which uses 3D camera technology for the assessment of Body Condition Score (BCS). We present a comparison of a range of common techniques at the different developmental stages of Computer Vision including data pre-processing and the implementation of Deep Learning for both 2D and 3D data formats commonly captured by 3D cameras. This research focuses on attaining better reliability from one deployment of an ACMS to the next and proposes a Geometric Deep Learning
44

Ma, Xiaojie, Jieyu Zhang, Tianchao Miao, Fawen Xie, and Zhongqiu Geng. "Measurement Approach for the Pose of Flanges in Cabin Assemblies through Distributed Vision." Sensors 24, no. 14 (2024): 4484. http://dx.doi.org/10.3390/s24144484.

Abstract:
The relative rotation angle between two cabins should be automatically and precisely obtained during automated assembly processes for spacecraft and aircraft. This paper introduces a method to solve this problem based on distributed vision, where two groups of cameras are employed to take images of mating features, such as dowel pins and holes, in oblique directions. Then, the relative rotation between the mating flanges of two cabins is calculated. The key point is the registration of the distributed cameras; thus, a simple and practical registration process is designed. It is assumed that th
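The final step of such a pipeline, recovering the relative rotation from matched pin/hole features, can be sketched in pure Python. The function name, the planar simplification, and the averaging scheme are illustrative assumptions, not the paper's method:

```python
import math

def relative_rotation_deg(pins, holes, center=(0.0, 0.0)):
    # Average in-plane angle that rotates each pin onto its matching hole
    # about the flange axis. Inputs are 2D points in the flange plane.
    cx, cy = center
    angles = []
    for (px, py), (hx, hy) in zip(pins, holes):
        a1 = math.atan2(py - cy, px - cx)
        a2 = math.atan2(hy - cy, hx - cx)
        d = a2 - a1
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        angles.append(d)
    return math.degrees(sum(angles) / len(angles))

pins = [(1, 0), (0, 1)]
holes = [(0, 1), (-1, 0)]  # both features rotated by +90 degrees
print(relative_rotation_deg(pins, holes))  # → 90.0
```

Averaging over several pin/hole pairs damps per-feature measurement noise, which is why registering all distributed cameras into one frame matters.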
45

de Medeiros Esper, Ian, Oleh Smolkin, Maksym Manko, Anton Popov, Pål Johan From, and Alex Mason. "Evaluation of RGB-D Multi-Camera Pose Estimation for 3D Reconstruction." Applied Sciences 12, no. 9 (2022): 4134. http://dx.doi.org/10.3390/app12094134.

Abstract:
Advances in visual sensor devices and computing power are revolutionising the interaction of robots with their environment. Cameras that capture depth information along with a common colour image play a significant role. These devices are cheap, small, and fairly precise. The information provided, particularly point clouds, can be generated in a virtual computing environment, providing complete 3D information for applications. However, off-the-shelf cameras often have a limited field of view, both on the horizontal and vertical axis. In larger environments, it is therefore often necessary to c
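Once the extrinsic pose of each RGB-D camera is estimated, their point clouds are fused by transforming every camera's points into a shared world frame. A minimal pure-Python sketch of that transform (a real pipeline would use numpy or Open3D; the example pose is illustrative):

```python
def transform_points(points, R, t):
    # Map 3D points from a camera frame into a common world frame using the
    # camera's extrinsic pose: world_p = R @ p + t.
    out = []
    for p in points:
        q = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
        out.append(q)
    return out

# Camera rotated 90 degrees about z and shifted 1 m along x (illustrative pose).
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = [1.0, 0.0, 0.0]
print(transform_points([[1.0, 0.0, 0.0]], R, t))  # → [[1.0, 1.0, 0.0]]
```

Applying this per camera and concatenating the results yields the combined cloud whose quality the paper's pose-estimation evaluation is about.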
46

Dill, Sebastian, Maurice Rohr, Gökhan Güney, et al. "Accuracy Evaluation of 3D Pose Estimation with MediaPipe Pose for Physical Exercises." Current Directions in Biomedical Engineering 9, no. 1 (2023): 563–66. http://dx.doi.org/10.1515/cdbme-2023-1141.

Abstract:
With the recent increase in interest in machine learning and computer vision, camera-based pose estimation has emerged as a promising new technology. One of the most popular libraries for camera-based pose estimation is MediaPipe Pose due to its computational efficiency, ease of use, and the fact that it is open-source. However, little work has been performed to establish how accurate the library is and whether it is suitable for usage in, for example, physical therapy. This paper aims to provide an initial assessment of this. We find that the pose estimation is highly dependent on th
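Accuracy in evaluations of this kind is often summarized with the mean per-joint position error (MPJPE). A minimal sketch of the metric (MediaPipe itself is not invoked here; joint layouts are illustrative):

```python
import math

def mpjpe(pred, truth):
    # Mean Per-Joint Position Error: the average Euclidean distance between
    # predicted and reference 3D joint positions.
    return sum(math.dist(p, q) for p, q in zip(pred, truth)) / len(pred)

pred  = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(mpjpe(pred, truth))  # → 0.5
```

Comparing MPJPE against a reference system (e.g., marker-based motion capture) is the usual way to decide whether a markerless estimator is precise enough for therapy use.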
47

Wei, Shanshan, Zhiqiang He, and Wei Xie. "Relative Pose Estimation Algorithm with Gyroscope Sensor." Journal of Sensors 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/8923587.

Abstract:
This paper proposes a novel vision and inertial fusion algorithm S2fM (Simplified Structure from Motion) for camera relative pose estimation. Different from current existing algorithms, our algorithm estimates rotation parameter and translation parameter separately. S2fM employs gyroscopes to estimate camera rotation parameter, which is later fused with the image data to estimate camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate accurately enough translation parameter, we propose a translation estimation algorith
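The rotation half of such a fusion scheme can be sketched by integrating gyroscope rate samples into a rotation matrix via Rodrigues' formula. The first-order integration below is illustrative only, not the paper's S2fM implementation:

```python
import math

def rodrigues(axis, theta):
    # 3x3 rotation matrix for a rotation of theta radians about a unit axis.
    x, y, z = axis
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def integrate_gyro(samples, dt):
    # Compose per-sample incremental rotations; samples are (wx, wy, wz)
    # angular rates in rad/s. First-order scheme for illustration.
    R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for wx, wy, wz in samples:
        w = math.sqrt(wx*wx + wy*wy + wz*wz)
        if w > 1e-12:
            axis = (wx / w, wy / w, wz / w)
            R = matmul(rodrigues(axis, w * dt), R)
    return R

# 100 samples of pi/2 rad/s about z over 1 s -> a 90-degree rotation;
# the first column of R (image of the x-axis) is approximately [0, 1, 0].
R = integrate_gyro([(0.0, 0.0, math.pi / 2)] * 100, 0.01)
```

With rotation fixed by the gyroscope in this way, only the translation remains to be recovered from image correspondences, which is the core simplification S2fM exploits.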
48

Zhou, Wei, Jinhua Zheng, and Haixia Xu. "A Vector Method for Camera Pose Estimation." International Journal of Advancements in Computing Technology 4, no. 11 (2012): 185–94. http://dx.doi.org/10.4156/ijact.vol4.issue11.20.

49

Xia, Jun-ying, Xiao-quan Xu, and Jiu-long Xiong. "Iterative Pose Estimation Using a Paraperspective Camera Model." Optics and Precision Engineering 20, no. 6 (2012): 1342–49. http://dx.doi.org/10.3788/ope.20122006.1342.

50

YANG, Hao, Feng ZHANG, and Juntao YE. "A Camera-IMU Relative Pose Calibration Method." ROBOT 33, no. 4 (2011): 419–26. http://dx.doi.org/10.3724/sp.j.1218.2011.00419.
