
Journal articles on the topic 'View cameras'


Consult the top 50 journal articles for your research on the topic 'View cameras.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Anjum, Nadeem. "Camera Localization in Distributed Networks Using Trajectory Estimation." Journal of Electrical and Computer Engineering 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/604647.

Abstract:
This paper presents an algorithm for camera localization using trajectory estimation (CLUTE) in a distributed network of nonoverlapping cameras. The algorithm recovers the extrinsic calibration parameters, namely, the relative position and orientation of the camera network on a common ground plane coordinate system. We first model the observed trajectories in each camera's field of view using Kalman filtering, then we use this information to estimate the missing trajectory information in the unobserved areas by fusing the results of a forward and backward linear regression estimation from adja
2

Agnello, F. "PERSPECTIVE RESTITUTION FROM VIEW CAMERAS PHOTOS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 17–24. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-17-2022.

Abstract:
The paper aims at discussing the accuracy of perspective restitution from view camera photos; view cameras are non-standard cameras frequently used in the past century for on-field shooting of buildings and urban sites, which is why the reconstruction of lost buildings often deals with photos taken with a view camera. The case study chosen for the proposed experiment is an urban complex built in Palermo in the ‘50s. The site features a very regular layout with surfaces at right angles, which supports the graphic reconstruction of the photos’ inner and outer orientation. The site has been su
3

Fan, Zhen, Xiu Li, and Yipeng Li. "Multi-Agent Deep Reinforcement Learning for Online 3D Human Poses Estimation." Remote Sensing 13, no. 19 (2021): 3995. http://dx.doi.org/10.3390/rs13193995.

Abstract:
Most multi-view based human pose estimation techniques assume the cameras are fixed. While in dynamic scenes, the cameras should be able to move and seek the best views to avoid occlusions and extract 3D information of the target collaboratively. In this paper, we address the problem of online view selection for a fixed number of cameras to estimate multi-person 3D poses actively. The proposed method exploits a distributed multi-agent based deep reinforcement learning framework, where each camera is modeled as an agent, to optimize the action of all the cameras. An inter-agent communication pr
4

Dang, Chang Gwon, Seung Soo Lee, Mahboob Alam, et al. "Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment." Sensors 24, no. 2 (2024): 427. http://dx.doi.org/10.3390/s24020427.

Abstract:
The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and then the data of each camera are transformed to global coordinates. However, when
5

Steger, Carsten, and Markus Ulrich. "A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses." Journal of Mathematical Imaging and Vision 64, no. 2 (2021): 105–30. http://dx.doi.org/10.1007/s10851-021-01055-x.

Abstract:
We propose a novel multi-view camera model for line-scan cameras with telecentric lenses. The camera model supports an arbitrary number of cameras and assumes a linear relative motion with constant velocity between the cameras and the object. We distinguish two motion configurations. In the first configuration, all cameras move with independent motion vectors. In the second configuration, the cameras are mounted rigidly with respect to each other and therefore share a common motion vector. The camera model can model arbitrary lens distortions by supporting arbitrary positions of the li
6

McKay, Carolyn, and Murray Lee. "Body-worn images: Point-of-view and the new aesthetics of policing." Crime, Media, Culture: An International Journal 16, no. 3 (2019): 431–50. http://dx.doi.org/10.1177/1741659019873774.

Abstract:
Police organisations across much of the Western world have eagerly embraced body-worn video camera technology, seen as a way to enhance public trust in police, provide transparency in policing activity, reduce conflict between police and citizens and provide a police perspective of incidents and events. Indeed, the cameras have become an everyday piece of police ‘kit’. Despite the growing ubiquity of the body-worn video camera, understandings of the nature and value of the audiovisual footage produced by police remain inchoate. Given body-worn video camera’s promise of veracity, this article i
7

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract:
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs has certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, which is used on aero phot
8

Zhang, Yaning, Tianhao Wu, Jungang Yang, and Wei An. "Infrared Camera Array System and Self-Calibration Method for Enhanced Dim Target Perception." Remote Sensing 16, no. 16 (2024): 3075. http://dx.doi.org/10.3390/rs16163075.

Abstract:
Camera arrays can enhance the signal-to-noise ratio (SNR) between dim targets and backgrounds through multi-view synthesis. This is crucial for the detection of dim targets. To this end, we design and develop an infrared camera array system with a large baseline. The multi-view synthesis of camera arrays relies heavily on the calibration accuracy of relative poses in the sub-cameras. However, the sub-cameras within a camera array lack strict geometric constraints. Therefore, most current calibration methods still consider the camera array as multiple pinhole cameras for calibration. Moreover,
9

Liu, Zhe, Zhaozong Meng, Nan Gao, and Zonghua Zhang. "Calibration of the Relative Orientation between Multiple Depth Cameras Based on a Three-Dimensional Target." Sensors 19, no. 13 (2019): 3008. http://dx.doi.org/10.3390/s19133008.

Abstract:
Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for
10

Chiu, Cheng Yu, Chih Han Chang, Hsin Jung Lin, and Tsong Liang Huang. "New Lane Departure Warning System Based on Side-View Cameras." Applied Mechanics and Materials 764-765 (May 2015): 1361–65. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.1361.

Abstract:
This paper addressed a new lane departure warning system (LDWS). We used side-view cameras to promote Advanced Driver Assistance Systems (ADAS). A left side-view camera detected the left lane next to the vehicle, and a right side-view camera detected the right lane. The two cameras ran their algorithms and gave warning messages independently and separately. Our algorithm combined those warning messages to analyze the environment situation. Finally, we used the LUXGEN MPV to test the system and showed the results of verification and tests.
11

Zhu, Yitao, Sheng Wang, Mengjie Xu, et al. "MUC: Mixture of Uncalibrated Cameras for Robust 3D Human Body Reconstruction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 11040–48. https://doi.org/10.1609/aaai.v39i10.33200.

Abstract:
Multiple cameras can provide comprehensive multi-view video coverage of a person. Fusing this multi-view data is crucial for tasks like behavioral analysis, although it traditionally requires camera calibration—a process that is often complex. Moreover, previous studies have overlooked the challenges posed by self-occlusion under multiple views and the continuity of human body shape estimation. In this study, we introduce a method to reconstruct the 3D human body from multiple uncalibrated camera views. Initially, we utilize a pre-trained human body encoder to process each camera view individu
12

Vishwas Venkat and Raja Reddy. "Review and analysis of the properties of 360-degree surround view cameras in autonomous vehicles." International Journal of Science and Research Archive 8, no. 1 (2023): 656–61. http://dx.doi.org/10.30574/ijsra.2023.8.2.0333.

Abstract:
Autonomous vehicles are becoming increasingly prevalent in today's world, and the demand for efficient and safe self-driving technology has never been higher. One key component of this technology is the 360-degree surround view camera, which provides a complete view of the vehicle's surroundings, allowing it to navigate safely and efficiently. In this paper, we conduct a comprehensive review and analysis of the properties of these cameras in autonomous vehicles. We begin by exploring the various types of 360-degree cameras available, including fisheye, parabolic, and mirror-based designs. We t
13

Abdulsattar, Fatimah. "The Effect of Using Projective Cameras on View-Independent Gait Recognition Performance." Iraqi Journal for Electrical and Electronic Engineering 14, no. 1 (2018): 22–29. http://dx.doi.org/10.37917/ijeee.14.1.3.

Abstract:
Gait as a biometric can be used to identify subjects at a distance and thus it receives great attention from the research community for security and surveillance applications. One of the challenges that affects gait recognition performance is view variation. Much work has been done to tackle this challenge. However, the majority of the work assumes that gait silhouettes are captured by affine cameras where only the height of silhouettes changes and the difference in viewing angle of silhouettes in one gait cycle is relatively small. In this paper, we analyze the variation in gait recognition p
14

Liu, Ruicong, and Feng Lu. "UVAGaze: Unsupervised 1-to-2 Views Adaptation for Gaze Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (2024): 3693–701. http://dx.doi.org/10.1609/aaai.v38i4.28159.

Abstract:
Gaze estimation has become a subject of growing interest in recent research. Most of the current methods rely on single-view facial images as input. Yet, it is hard for these approaches to handle large head angles, leading to potential inaccuracies in the estimation. To address this issue, adding a second-view camera can help better capture eye appearance. However, existing multi-view methods have two limitations. 1) They require multi-view annotations for training, which are expensive. 2) More importantly, during testing, the exact positions of the multiple cameras must be known and match tho
15

Brucato, Ben. "Policing Made Visible: Mobile Technologies and the Importance of Point of View." Surveillance & Society 13, no. 3/4 (2015): 455–73. http://dx.doi.org/10.24908/ss.v13i3/4.5421.

Abstract:
Cameras are ubiquitous and increasingly mobile. While CCTV has captured considerable attention by surveillance researchers, the new visibility of police activities is increasingly produced by incidental sousveillance and wearable on-officer camera systems. This article considers advocacy for policing’s new visibility, contrasting that of police accountability activists who film police with designers and early adopters of on-officer cameras. In both accounts, these devices promise accountability by virtue of their mechanical objectivity. However, to each party, accountability functions rather d
16

García-Ruiz, Pablo, Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez, and Rafael Medina-Carnicer. "Sparse Indoor Camera Positioning with Fiducial Markers." Applied Sciences 15, no. 4 (2025): 1855. https://doi.org/10.3390/app15041855.

Abstract:
Accurately estimating the pose of large arrays of fixed indoor cameras presents a significant challenge in computer vision, especially since traditional methods predominantly rely on overlapping camera views. Existing approaches for positioning non-overlapping cameras are scarce and generally limited to simplistic scenarios dependent on specific environmental features, thereby leaving a significant gap in applications for large and complex settings. To bridge this gap, this paper introduces a novel methodology that effectively positions cameras with and without overlapping views in complex ind
17

Wang, Jiadun, Shengtao Li, and Kai Huang. "Point Cloud Fusion of Human Respiratory Motion Under Multi-View Time-of-Flight Camera System: Voxelization Method Using 2D Voxel Block Index." Sensors 25, no. 10 (2025): 3062. https://doi.org/10.3390/s25103062.

Abstract:
Time-of-flight (ToF) 3D cameras can obtain a real-time point cloud of human respiratory motion in medical robot scenes. Through this point cloud, real-time displacement information can be provided for the medical robot to avoid the robot injuring the human body during the operation due to the positioning deviation. However, multi-camera deployments face a conflict between spatial coverage and measurement accuracy due to the limitations of different types of ToF modulation. To address this, we design a multi-camera acquisition system incorporating different modulation schemes and propose a mult
18

Hanel, A., and U. Stilla. "STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 181–88. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-181-2017.

Abstract:
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments which are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-o
19

Liu, Zhong Yan, Guo Quan Wang, and Dong Ping Wang. "A 3D Reconstruction Method Based on Binocular View Geometry." Applied Mechanics and Materials 33 (October 2010): 299–303. http://dx.doi.org/10.4028/www.scientific.net/amm.33.299.

Abstract:
A method was proposed to achieve three-dimensional (3D) reconstruction based on binocular view geometry. Images used to calibrate the cameras and reconstruct a car’s rearview mirror were captured by an image acquisition system; from the calibration images, the cameras’ intrinsic and extrinsic parameters and the projective and fundamental matrices were derived with Matlab 7.1. The collected rearview mirror images were preprocessed to extract the refined laser line and feature points, and the appropriate match points were found by the epipolar geometry principle; according to the camera imaging model, the coordinates of space points were calculated, display point
20

Choi, Kyoungtaek, Ho Jung, and Jae Suhr. "Automatic Calibration of an Around View Monitor System Exploiting Lane Markings." Sensors 18, no. 9 (2018): 2956. http://dx.doi.org/10.3390/s18092956.

Abstract:
This paper proposes a method that automatically calibrates four cameras of an around view monitor (AVM) system in a natural driving situation. The proposed method estimates orientation angles of four cameras composing the AVM system, and assumes that their locations and intrinsic parameters are known in advance. This method utilizes lane markings because they exist in almost all on-road situations and appear across images of adjacent cameras. It starts by detecting lane markings from images captured by four cameras of the AVM system in a cost-effective manner. False lane markings are rejected
21

Shahjalal, Md, Moh Khalid Hasan, Mostafa Zaman Chowdhury, and Yeong Min Jang. "Smartphone Camera-Based Optical Wireless Communication System: Requirements and Implementation Challenges." Electronics 8, no. 8 (2019): 913. http://dx.doi.org/10.3390/electronics8080913.

Abstract:
Visible light and infrared bands of the optical spectrum used for optical camera communication (OCC) are becoming a promising technology nowadays. Researchers are proposing new OCC-based architectures and applications in both indoor and outdoor systems using the embedded cameras on smartphones, with a view to making them user-friendly. Smartphones have useful features for developing applications using the complementary metal-oxide-semiconductor cameras, which can receive data from optical transmitters. However, several challenges have arisen in increasing the capacity and communication range,
22

Zhao, Ruiyi, Yangshi Ge, Ye Duan, and Quanhong Jiang. "Large-field Gesture Tracking and Recognition for Augmented Reality Interaction." Journal of Physics: Conference Series 2560, no. 1 (2023): 012016. http://dx.doi.org/10.1088/1742-6596/2560/1/012016.

Abstract:
In recent years, with the continuous development of computer vision and artificial intelligence technology, gesture recognition has been widely used in many fields, such as virtual reality and augmented reality. However, the traditional binocular camera architecture is limited by its restricted field-of-view angle and depth perception range. The fisheye camera is gradually being applied in the gesture recognition field because of its advantage of a larger field-of-view angle. Fisheye cameras offer a wider field of vision than previous binocular cameras, allowing for a greater range of gesture recogniti
23

Ramm, Roland, Pedro de Dios Cruz, Stefan Heist, Peter Kühmstedt, and Gunther Notni. "Fusion of Multimodal Imaging and 3D Digitization Using Photogrammetry." Sensors 24, no. 7 (2024): 2290. http://dx.doi.org/10.3390/s24072290.

Abstract:
Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual’s health conditions. Combining multimodal camera data with shape data from 3D sensors is a challenging issue. Multimodal cameras, e.g., hyperspectral cameras, or cameras outside the visible light spectrum, e.g., thermal cameras, lack strongly in terms of resolution and image quality compared with state-of-the-art photo cameras. In this
24

Hu, Yifan, Zhenlei Lyu, Peng Fan, et al. "A Wide Energy Range and 4π-View Gamma Camera with Interspaced Position-Sensitive Scintillator Array and Embedded Heavy Metal Bars." Sensors 23, no. 2 (2023): 953. http://dx.doi.org/10.3390/s23020953.

Abstract:
(1) Background: Gamma cameras have wide applications in industry, including nuclear power plant monitoring, emergency response, and homeland security. The desirable properties of a gamma camera include small weight, good resolution, large field of view (FOV), and wide imageable source energy range. Compton cameras can have a 4π FOV but have limited sensitivity at low energy. Coded-aperture gamma cameras are operatable at a wide photon energy range but typically have a limited FOV and increased weight due to the thick heavy metal collimators and shielding. In our lab, we previously proposed a 4
25

Van Crombrugge, Izaak, Rudi Penne, and Steve Vanlanduit. "Extrinsic Camera Calibration with Line-Laser Projection." Sensors 21, no. 4 (2021): 1091. http://dx.doi.org/10.3390/s21041091.

Abstract:
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the c
26

Nakada, Ryuji, Masanori Takigawa, Tomowo Ohga, and Noritsuna Fujii. "VERIFICATION OF POTENCY OF AERIAL DIGITAL OBLIQUE CAMERAS FOR AERIAL PHOTOGRAMMETRY IN JAPAN." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 2, 2016): 63–68. http://dx.doi.org/10.5194/isprs-archives-xli-b1-63-2016.

Abstract:
Digital oblique aerial camera (hereinafter called “oblique cameras”) is an assembly of medium format digital cameras capable of shooting digital aerial photographs in five directions, i.e. nadir view and oblique views (forward and backward, left and right views) simultaneously, and it is used for shooting digital aerial photographs efficiently for generating 3D models in a wide area. For aerial photogrammetry of public survey in Japan, it is required to use large format cameras, like DMC and UltraCam series, to ensure aerial photogrammetric accuracy. Alt
27

Juarez-Salazar, Rigoberto. "Flat mirrors, virtual rear-view cameras, and camera-mirror calibration." Optik 317 (November 2024): 172067. http://dx.doi.org/10.1016/j.ijleo.2024.172067.

28

Astrid, Marcella, and Seung‐Ik Lee. "Assembling three one‐camera images for three‐camera intersection classification." ETRI Journal 45, no. 5 (2023): 862–73. http://dx.doi.org/10.4218/etrij.2023-0100.

Abstract:
Determining whether an autonomous self‐driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three‐camera model, which would enable us to more easily compi
29

Obayashi, Mizuki, Shohei Mori, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume. "Multi-View Surgical Camera Calibration with None-Feature-Rich Video Frames: Toward 3D Surgery Playback." Applied Sciences 13, no. 4 (2023): 2447. http://dx.doi.org/10.3390/app13042447.

Abstract:
Mounting multi-view cameras within a surgical light is a practical choice since some cameras are expected to observe surgery with few occlusions. Such multi-view videos must be reassembled for easy reference. A typical way is to reconstruct the surgery in 3D. However, the geometrical relationship among cameras is changed because each camera independently moves every time the lighting is reconfigured (i.e., every time surgeons touch the surgical light). Moreover, feature matching between surgical images is potentially challenging because of missing rich features. To address the challenge, we pr
30

Oberg, Andrew. "THIS HAS GOT NOTHING TO DO WITH GEORGE." Think 13, no. 37 (2014): 47–55. http://dx.doi.org/10.1017/s1477175613000468.

Abstract:
Security cameras have become a ubiquitous part of everyday life in most major cities, yet each new camera seems to come with cries of foul play by defenders of privacy rights. Our long history with these cameras and CCTV networks does not seem to have alleviated our concerns with being watched, and as we feel ourselves losing privacy in other areas the worry generated by security cameras has remained. Our feelings of disquiet, however, are unnecessary as they stem from an erroneous view of the self. The following argues that this view of an autonomous and atomistic self is both detrimental and
31

Chino, Masaki, Junwoon Lee, Qi An, and Atsushi Yamashita. "Robot Localization by Data Integration of Multiple Thermal Cameras in Low-Light Environment." International Journal of Automation Technology 19, no. 4 (2025): 566–74. https://doi.org/10.20965/ijat.2025.p0566.

Abstract:
A method is proposed for interpolating pose information by integrating data from multiple thermal cameras when a global navigation satellite system temporarily experiences a decrease in accuracy. When temperature information obtained from thermal cameras is visualized, a two-stage temperature range restriction is applied to focus only on areas with temperature variations, making conversion into clearer images possible. To compensate for the narrow field of view of thermal cameras, multiple thermal cameras are oriented in different directions. Pose estimation is performed with each camera, and
32

Hillemann, M., and B. Jutzi. "UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W3 (August 18, 2017): 17–24. http://dx.doi.org/10.5194/isprs-annals-iv-2-w3-17-2017.

Abstract:
Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic
33

Perez-Yus, Alejandro, Nicolás Gonzalo López, and Jose J. Guerrero. "Scaled layout recovery with wide field of view RGB-D." Image and Vision Computing 87 (May 2, 2019): 76–96. https://doi.org/10.1016/j.imavis.2019.04.008.

Abstract:
In this work, we propose a method that integrates depth and fisheye cameras to obtain a wide 3D scene reconstruction with scale in one single shot. The motivation of such integration is to overcome the narrow field of view in consumer RGB-D cameras and lack of depth and scale information in fisheye cameras. The hybrid camera system we use is easy to build and calibrate, and currently consumer devices with similar configuration are already available in the market. With this system, we have a portion of the scene with shared field of view that provides simultaneously color and depth. In the rest
35

Huang, Yuru, Yikun Liu, Haishan Liu, et al. "Multi-View Optical Image Fusion and Reconstruction for Defogging without a Prior In-Plane." Photonics 8, no. 10 (2021): 454. http://dx.doi.org/10.3390/photonics8100454.

Abstract:
Image fusion and reconstruction from multi-images taken by distributed or mobile cameras need accurate calibration to avoid image mismatching. This calibration process becomes difficult in fog when no clear nearby reference is available. In this work, the fusion of multi-view images taken in fog by two cameras fixed on a moving platform is realized. The positions and aiming directions of the cameras are determined by taking a close visible object as a reference. One camera with a large field of view (FOV) is applied to acquire images of a short-distance object which is still visible in fog. T
36

YAO, YI, CHUNG-HAO CHEN, BESMA ABIDI, DAVID PAGE, ANDREAS KOSCHAN, and MONGI ABIDI. "MULTI-CAMERA POSITIONING FOR AUTOMATED TRACKING SYSTEMS IN DYNAMIC ENVIRONMENTS." International Journal of Information Acquisition 07, no. 03 (2010): 225–42. http://dx.doi.org/10.1142/s0219878910002208.

Full text
Abstract:
Most existing camera placement algorithms focus on coverage and/or visibility analysis, which ensures that the object of interest is visible in the camera's field of view (FOV). According to recent literature, handoff safety margin is introduced to sensor planning so that sufficient overlapped FOVs among adjacent cameras are reserved for successful and smooth target transition. In this paper, we investigate the sensor planning problem when considering the dynamic interactions between moving targets and observing cameras. The probability of camera overload is explored to model the aforementione
APA, Harvard, Vancouver, ISO, and other styles
37

Birukov, Elissey D., Boris Kh Barladyan, Lev Z. Shapiro, Ildar V. Valiev, and Alexei G. Voloboy. "Modelling and Verification of Car Rear View Camera Using Ray Optics Algorithms." Light & Engineering, no. 02-2024 (April 2024): 55–62. http://dx.doi.org/10.33383/2023-080.

Full text
Abstract:
Rear view cameras are widely used in the automotive industry. They are used in modern car navigation systems to improve the driver’s perception of the situation behind the car. Ultra-wide-angle fisheye lenses are installed on the car for maximum coverage, but such images are not comfortable for human perception. Therefore, one of the main problems in using such cameras is developing fast algorithms for converting fisheye images into a set of images corresponding to wide-angle and normal virtual cameras, as well as constructing a “top view”. This work examines two image transformation algorithms, both of
APA, Harvard, Vancouver, ISO, and other styles
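The fisheye-to-perspective conversion mentioned in the abstract above can be sketched as an inverse pixel remapping: for each pixel of the desired virtual pinhole view, find the source coordinate in the fisheye image. The sketch below assumes an equidistant fisheye projection (r = f·θ) and a shared optical axis; the function name and parameters are illustrative, not taken from the cited paper.

```python
import math

def perspective_to_fisheye(u, v, f_pin, f_fish, cx, cy):
    """Map a pixel (u, v) of a virtual pinhole camera (focal f_pin) to the
    source coordinate in an equidistant fisheye image (focal f_fish).
    Both images share the principal point (cx, cy) and the optical axis."""
    dx, dy = u - cx, v - cy
    r_pin = math.hypot(dx, dy)
    if r_pin == 0.0:
        return float(cx), float(cy)      # the optical axis maps to itself
    theta = math.atan2(r_pin, f_pin)     # viewing-ray angle off the axis
    r_fish = f_fish * theta              # equidistant projection radius
    s = r_fish / r_pin
    return cx + s * dx, cy + s * dy
```

Since tan θ > θ for θ > 0, off-axis pixels of the pinhole view always sample the fisheye image closer to the principal point, which is where the familiar barrel compression comes from.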
38

Xu, De, and Qingbin Wang. "A new vision measurement method based on active object gazing." International Journal of Advanced Robotic Systems 14, no. 4 (2017): 172988141771598. http://dx.doi.org/10.1177/1729881417715984.

Full text
Abstract:
A new vision measurement system is developed with two cameras. One is fixed in pose to serve as a monitor camera. It finds and tracks objects in image space. The other is actively rotated to track the object in Cartesian space, working as an active object-gazing camera. The intrinsic parameters of the monitor camera are calibrated. The view angle corresponding to the object is calculated from the object’s image coordinates and the camera’s intrinsic parameters. The rotation angle of the object-gazing camera is measured with an encoder. The object’s depth is computed with the rotation angle and
APA, Harvard, Vancouver, ISO, and other styles
39

Qi, Xing Guang, and Yi Zhen. "Research of the Paper Defect On-Line Inspection System Based on Distributed Machine Vision." Advanced Materials Research 562-564 (August 2012): 1805–8. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.1805.

Full text
Abstract:
This paper presents a distributed machine vision inspection system, which has a large field of view (FOV) and can perform high-precision, high-speed real-time inspection for wide paper sheet detection. The system consists of multiple GigE Vision line-scan cameras connected through Gigabit Ethernet. The cameras are arranged into a linear array so that every camera’s FOV is merged into one large FOV while the resolution remains unchanged. In order to achieve high processing speed, the captured images from each camera are sent into one dedicated computer for distributed and parallel im
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Ming Jing, Yu Bing Dong, and Guang Liang Cheng. "Multiple CMOS Intersection Measuring System Modeling and Analysis." Advanced Materials Research 614-615 (December 2012): 1299–302. http://dx.doi.org/10.4028/www.scientific.net/amr.614-615.1299.

Full text
Abstract:
Multiple high-speed CMOS cameras compose an intersection measuring system that splices a large effective field of view (EFV). The key problem of the system is how to locate the multiple CMOS cameras in suitable positions. The effective field of view is determined according to the size, quantity and dispersion area of the objects, so as to place cameras below, on both sides of, and ahead of the moving targets. This paper analyzes the effective spliced field of view, operating range, etc., by establishing a mathematical model and a MATLAB simulation. The location method of the system has the advantages of flexible splicing, convenient adjustment,
APA, Harvard, Vancouver, ISO, and other styles
41

Zatserkovnyy, Aleksander, and Evgeni Nurminski. "Identification of Location and Camera Parameters for Public Live Streaming Web Cameras." Mathematics 10, no. 19 (2022): 3601. http://dx.doi.org/10.3390/math10193601.

Full text
Abstract:
Public live streaming web cameras are quite common now and widely used by drivers for qualitative analysis of traffic conditions. At the same time, they can be a valuable source of quantitative information on transport flows and speed for the development of urban traffic models. However, to obtain reliable data from raw video streams, it is necessary to preprocess them, considering the camera location and parameters without direct access to the camera. Here we suggest a procedure for estimating camera parameters, which allows us to determine pixel coordinates for a point cloud in the camera’s
APA, Harvard, Vancouver, ISO, and other styles
42

Zhao, Aojie, Yifan Liu, Kun Cheng, Aiping Ma, and Jianguo Yu. "A top-view indoor localization based on discrete distillation of CLIP." Journal of Physics: Conference Series 2816, no. 1 (2024): 012018. http://dx.doi.org/10.1088/1742-6596/2816/1/012018.

Full text
Abstract:
Indoor robot localization is a challenging problem in computer vision due to sensor obstacles in a crowded environment. Pure vision localization is increasingly popular since it does not require sensors other than low-cost cameras. We adopt a top-view camera setup, effectively avoiding the problem of positioning failure due to potential occlusion of front-view cameras. We distill a pre-trained large-scale vision-language CLIP model to mitigate the performance degradation caused by the small dataset size. Our solution achieved promising performance in our customized class
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Li-Heng, YuJu Cheng, and Tyng-Luh Liu. "Tracking Everything Everywhere across Multiple Cameras." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 7 (2025): 7789–97. https://doi.org/10.1609/aaai.v39i7.32839.

Full text
Abstract:
Pixel tracking in single-view video sequences has recently emerged as a significant area of research. While previous work has primarily concentrated on tracking within a given video, we propose to expand pixel correspondence estimation into multi-view scenarios. The central concept involves utilizing a canonical space that preserves a universal 3D representation across different views and timesteps. This model allows for precise tracking of points even through prolonged occlusions and significant deformations in appearance between views. Moreover, we show that our model, through the use of an
APA, Harvard, Vancouver, ISO, and other styles
44

Almalkawi, Islam T., Rami Halloush, Mohammad F. Al-Hammouri, et al. "Intelligent IoT-Based Network Clustering and Camera Distribution Algorithm Using Reinforcement Learning." Technologies 13, no. 1 (2024): 4. https://doi.org/10.3390/technologies13010004.

Full text
Abstract:
The advent of a wide variety of affordable communication devices and cameras has enabled IoT systems to provide effective solutions for a wide range of civil and military applications. One of the potential applications is a surveillance system in which several cameras collaborate to monitor a specific area. However, existing surveillance systems are often based on traditional camera distribution and come with additional communication costs and redundancy in the detection range. Thus, we propose a smart and efficient camera distribution system based on machine learning using two Reinforcement L
APA, Harvard, Vancouver, ISO, and other styles
45

Baek, Seung-Hae, Pathum Rathnayaka, and Soon-Yong Park. "Calibration of a Stereo Radiation Detection Camera Using Planar Homography." Journal of Sensors 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/8928096.

Full text
Abstract:
This paper proposes a calibration technique of a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, few or no stereo calibration has been investigated in the radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the
APA, Harvard, Vancouver, ISO, and other styles
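The planar-homography calibration named in the title above rests on estimating a 3×3 homography from point correspondences on a plane. A minimal direct-linear-transform (DLT) sketch is shown below; it omits the coordinate normalization used in robust practice, and is a textbook illustration rather than the cited paper's procedure.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 planar homography H with dst ~ H @ src (in
    homogeneous coordinates) via the direct linear transform.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # The homography is the right singular vector of the smallest
    # singular value (the null space of A for exact correspondences).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With four exact correspondences the 8×9 system has a one-dimensional null space, so the recovered H is exact up to the scale fixed by the final normalization.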
46

Syawaludin, Muhammad Firdaus, Myungho Lee, and Jae-In Hwang. "Foveation Pipeline for 360° Video-Based Telemedicine." Sensors 20, no. 8 (2020): 2264. http://dx.doi.org/10.3390/s20082264.

Full text
Abstract:
Pan-tilt-zoom (PTZ) and omnidirectional cameras serve as a video-mediated communication interface for telemedicine. Most cases use either PTZ or omnidirectional cameras exclusively; even when used together, images from the two are shown separately on 2D displays. Conventional foveated imaging techniques may offer a solution for exploiting the benefits of both cameras, i.e., the high resolution of the PTZ camera and the wide field-of-view of the omnidirectional camera, but displaying the unified image on a 2D display would reduce the benefit of “omni-” directionality. In this paper, we introduc
APA, Harvard, Vancouver, ISO, and other styles
47

Stadnichuk, Viacheslav, and Valentin Kolobrodov. "Mathematical aspects of distortion calibration for digital cameras." Ukrainian Metrological Journal, no. 1 (April 12, 2023): 46–52. http://dx.doi.org/10.24027/2306-7039.1.2023.282602.

Full text
Abstract:
Nowadays, many human activities are automated through computerization, and this process has not spared the automotive industry. The latest developments in this field promise that in the near future cars will be completely autonomous. Before that, however, a number of issues urgently need to be addressed, such as increasing the angle of view for greater coverage of the road with minimal space curvature. It is known that as the angle of view increases, so does the distortion (a mismatch in geometric similarity between an object and its image). This mismatch significantly reduces the
APA, Harvard, Vancouver, ISO, and other styles
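The distortion that the abstract above describes — growing with the angle of view — is commonly modelled with polynomial radial terms. A minimal sketch of the standard two-coefficient radial model on normalized image coordinates follows; it is a generic textbook model, not the calibration method of the cited paper.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Apply a two-coefficient radial distortion model to normalized
    image coordinates (x, y):
        x_d = x * (1 + k1*r^2 + k2*r^4),  r^2 = x^2 + y^2
    Negative k1 gives barrel distortion (points pulled toward the
    center), positive k1 gives pincushion distortion."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Calibration then amounts to estimating k1 and k2 (plus intrinsics) so that applying the inverse of this mapping straightens the image; because r² grows quadratically toward the edges, wide-angle lenses are hit hardest, matching the abstract's observation.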
48

Alsadik, Bashar, Fabio Remondino, and Francesco Nex. "Simulating a Hybrid Acquisition System for UAV Platforms." Drones 6, no. 11 (2022): 314. http://dx.doi.org/10.3390/drones6110314.

Full text
Abstract:
Currently, there is a rapid trend in the production of airborne sensors consisting of multi-view cameras or hybrid sensors, i.e., a LiDAR scanner coupled with one or multiple cameras to enrich the data acquisition in terms of colors, texture, completeness of coverage, accuracy, etc. However, the current UAV hybrid systems are mainly equipped with a single camera that will not be sufficient to view the facades of buildings or other complex objects without having double flight paths with a defined oblique angle. This entails extensive flight planning, acquisition duration, extra costs, and data
APA, Harvard, Vancouver, ISO, and other styles
49

Wan, Cheng, and Jun Sato. "Multiple View Geometry for Moving Cameras." Journal of Computational and Theoretical Nanoscience 13, no. 5 (2016): 2867–73. http://dx.doi.org/10.1166/jctn.2016.4931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Thoeni, K., A. Giacomini, R. Murtagh, and E. Kniest. "A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 573–80. http://dx.doi.org/10.5194/isprsarchives-xl-5-573-2014.

Full text
Abstract:
This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work
APA, Harvard, Vancouver, ISO, and other styles