
Journal articles on the topic 'Mono camera'


Consult the top 50 journal articles for your research on the topic 'Mono camera.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Xiao, Yao, Xiaogang Ruan, and Xiaoqing Zhu. "PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration." Journal of Autonomous Intelligence 1, no. 2 (January 21, 2019): 29. http://dx.doi.org/10.32629/jai.v1i2.33.

Abstract:
Feature detection and tracking, which rely heavily on the gray-value information of images, are very important procedures for visual-inertial odometry (VIO), and the tracking results significantly affect both the accuracy of the estimation and the robustness of the VIO. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
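For illustration, the photometric correction applied before feature tracking can be sketched as follows; this is a minimal sketch, not the authors' implementation, and the inverse-response table, vignette map, and function names are assumptions of the example.

```python
import numpy as np

def photometrically_correct(image, inv_response, vignette, exposure_time):
    """Map raw 8-bit intensities to scene irradiance: undo the camera
    response function, the lens attenuation (vignette), and the exposure
    time, so a feature keeps a stable value across frames."""
    radiance = inv_response[image.astype(np.uint8)]      # undo response G
    radiance = radiance / np.clip(vignette, 1e-6, None)  # undo attenuation V
    return radiance / exposure_time                      # undo exposure

# Toy example: an identity-like inverse response and a flat vignette
# leave only the exposure normalization.
inv_response = np.linspace(0.0, 1.0, 256)   # lookup table for G^{-1}
vignette = np.ones((480, 640))              # per-pixel attenuation in (0, 1]
frame = np.full((480, 640), 128, dtype=np.uint8)
print(photometrically_correct(frame, inv_response, vignette, 0.02).mean())
```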
2

Lee, Tae-Jae, Hoon Lee, and Dong-Il Dan Cho. "Obstacle Detection Algorithm Using Forward-Viewing Mono Camera." Journal of Institute of Control, Robotics and Systems 21, no. 9 (September 1, 2015): 858–62. http://dx.doi.org/10.5302/j.icros.2015.15.0104.

3

Ghosh, Bijoy K., Di Xiao, Ning Xi, and Tzyh-Jong Tarn. "3D Part Manipulation Aided by Uncalibrated Mono-Camera." IFAC Proceedings Volumes 30, no. 20 (September 1997): 741–46. http://dx.doi.org/10.1016/s1474-6670(17)44345-x.

4

Mohedano, Raul, and Narciso Garcia. "Robust multi-camera 3D tracking from mono-camera 2D tracking using Bayesian Association." IEEE Transactions on Consumer Electronics 56, no. 1 (February 2010): 1–8. http://dx.doi.org/10.1109/tce.2010.5439118.

5

Mahajan, Sandip. "OBSTACLE DETECTION USING MONO VISION CAMERA AND LASER SCANNER." International Journal of Research in Engineering and Technology 02, no. 12 (December 25, 2013): 684–90. http://dx.doi.org/10.15623/ijret.2013.0212117.

6

Davarci, Aylin, Nico Schick, and Reiner Marchthaler. "Detection of Perpendicular Parking Spaces with a Mono Camera." ATZ worldwide 120, no. 12 (November 30, 2018): 66–69. http://dx.doi.org/10.1007/s38311-018-0176-7.

7

Sato, Kei, Keisuke Yoneda, Ryo Yanase, and Naoki Suganuma. "Mono-Camera-Based Robust Self-Localization Using LIDAR Intensity Map." Journal of Robotics and Mechatronics 32, no. 3 (June 20, 2020): 624–33. http://dx.doi.org/10.20965/jrm.2020.p0624.

Abstract:
An image-based self-localization method for automated vehicles is proposed herein. The general self-localization method estimates a vehicle’s location on a map by collating a predefined map with a sensor’s observation values. The same sensor, generally light detection and ranging (LIDAR), is used to acquire map data and observation values. In this study, to develop a low-cost self-localization system, we estimate the vehicle’s location on a LIDAR-created map using images captured by a mono-camera. The similarity distribution between a mono-camera image transformed into a bird’s-eye image and a map is created in advance by template matching the images. Furthermore, a method to estimate a vehicle’s location based on the acquired similarity is proposed. The proposed self-localization method is evaluated on the driving data from urban public roads; it is found that the proposed method improved the robustness of the self-localization system compared with the previous camera-based method.
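The template-matching step described above can be illustrated with a rough sketch; synthetic arrays stand in for the bird's-eye camera image and the LIDAR intensity map, and the function name is illustrative.

```python
import cv2
import numpy as np

def locate_on_map(birds_eye, intensity_map):
    """Slide the bird's-eye camera image over the LIDAR intensity map and
    return the normalized cross-correlation surface; its peak is a
    candidate vehicle position."""
    scores = cv2.matchTemplate(intensity_map, birds_eye, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return scores, best_xy, best_score

# Toy data: a random map and a patch cut from it as the "observation".
world = np.random.rand(400, 400).astype(np.float32)
patch = world[150:200, 220:270].copy()          # true position (220, 150)
_, xy, score = locate_on_map(patch, world)
print(xy, round(score, 3))                      # -> (220, 150) 1.0
```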
8

Chao, Zhichao, Sihua Fu, Guangwen Jiang, and Qifeng Yu. "Mono Camera and Laser Rangefinding Sensor Position-Pose Measurement System." Acta Optica Sinica 31, no. 3 (2011): 0312001. http://dx.doi.org/10.3788/aos201131.0312001.

9

Muslikhin, D. Irmawati, F. Arifin, A. Nasuha, N. Hasanah, and Y. Indrihapsari. "Prediction of XYZ coordinates from an image using mono camera." Journal of Physics: Conference Series 1456 (January 2020): 012015. http://dx.doi.org/10.1088/1742-6596/1456/1/012015.

10

Cho, Yeongcheol, Seungwoo Kim, and Seongkeun Park. "A Lane Following Mobile Robot Navigation System Using Mono Camera." Journal of Physics: Conference Series 806 (February 2017): 012003. http://dx.doi.org/10.1088/1742-6596/806/1/012003.

11

Ali, Md Osman, Md Faisal Ahmed, Moh Khalid Hasan, Md Shahjalal, Md Habibur Rahman, Israt Jahan, and Yeong Min Jang. "Mono Camera-Based Optical Vehicular Communication for an Advanced Driver Assistance System." Electronics 10, no. 13 (June 29, 2021): 1564. http://dx.doi.org/10.3390/electronics10131564.

Abstract:
This paper proposes a new waveform that combines low-rate and high-rate data streams to detect the region-of-interest signal in a high-mobility environment using optical camera communication. The proposed technique augments the bit rate of the low-rate stream; consequently, the link setup time is reduced and the requirement for a low-frame-rate camera is eliminated. Additionally, both the low-rate and high-rate data streams in the proposed bi-level pulse position modulation are decoded with a unique adaptive thresholding mechanism using a high-frame-rate camera. We also propose a vehicle localization scheme to assist drivers in maintaining a safe following distance, which can significantly reduce the frequency of accidents. Moreover, two neural networks are proposed: one to detect the light-emitting diodes (LEDs) for localization and communication, and one to estimate the road curvature from the different rear LED shapes of the forward vehicle. The system is implemented, and its performance is analyzed in Python 3.7. The implementation results show that the proposed system achieves 75% localization accuracy, a 150 bps low-rate stream, and a 600 bps high-rate stream over a range of 25 m with a commercial 30 fps camera.
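A simplified stand-in for the adaptive thresholding used to binarize the per-frame LED samples might look as follows; the moving-average rule and window size are assumptions of this sketch, not the paper's exact mechanism.

```python
import numpy as np

def adaptive_threshold_decode(samples, window=4):
    """Binarize a per-frame LED brightness sequence against a
    moving-average threshold; 1 where the sample is brighter than its
    local mean."""
    samples = np.asarray(samples, dtype=float)
    local_mean = np.convolve(samples, np.ones(window) / window, mode="same")
    return (samples > local_mean).astype(int)

brightness = [10, 80, 12, 78, 75, 11, 79, 13]   # toy per-frame samples
print(adaptive_threshold_decode(brightness))    # per-sample 0/1 bits
```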
12

Kim, HyeonSeok, MinGyu Park, WonIl Son, HyukDoo Choi, and SeongKeun Park. "Deep Learning based Object Detection and Distance Estimation using Mono Camera." Journal of Korean Institute of Intelligent Systems 28, no. 3 (June 30, 2018): 201–9. http://dx.doi.org/10.5391/jkiis.2018.28.3.201.

13

YANG, Cheng. "Structure Design of Binocular Vision Sensor Using Mono-camera with Mirrors." Journal of Mechanical Engineering 47, no. 22 (2011): 7. http://dx.doi.org/10.3901/jme.2011.22.007.

14

Han, J., O. Heo, M. Park, S. Kee, and M. Sunwoo. "Vehicle distance estimation using a mono-camera for FCW/AEB systems." International Journal of Automotive Technology 17, no. 3 (April 30, 2016): 483–91. http://dx.doi.org/10.1007/s12239-016-0050-9.

15

Liu, Xin, Egor Bondarev, Sander R. Klomp, Joury Zimmerman, and Peter H. N. de With. "Semantic 3D Indoor Reconstruction with Stereo Camera Imaging." Electronic Imaging 2021, no. 18 (January 18, 2021): 105–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-105.

Abstract:
On-the-fly reconstruction of 3D indoor environments has recently become an important research field for providing situational awareness to first responders, such as police and defence officers. Protocols do not allow the deployment of active sensors (LiDAR, ToF, IR cameras), to prevent the danger of being exposed. Therefore, passive sensors, such as stereo cameras or moving mono sensors, are the only viable options for 3D reconstruction. At present, even the best portable stereo cameras provide an inaccurate estimation of depth images, caused by the small camera baseline. Reconstructing a complete scene from inaccurate depth images then becomes a challenging task. In this paper, we present a real-time ROS-based system for first responders that performs semantic 3D indoor reconstruction based purely on stereo camera imaging. The major components in the ROS system are depth estimation, semantic segmentation, SLAM and 3D point-cloud filtering. First, we improve the semantic segmentation by training the DeepLab V3+ model [9] with a filtered combination of several publicly available semantic segmentation datasets. Second, we propose and experiment with several noise filtering techniques on both depth images and generated point-clouds. Finally, we embed semantic information into the mapping procedure to achieve an accurate 3D floor plan. The obtained semantic reconstruction provides important clues about the inside structure of an unseen building, which can be used for navigation.
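One depth-filtering-plus-back-projection step of such a pipeline can be sketched as below; the neighbour-jump rule and camera intrinsics are illustrative assumptions, not the filters evaluated in the paper.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, max_jump=0.3):
    """Drop depth pixels that jump more than max_jump metres relative to
    their left neighbour (a crude speckle filter), then back-project the
    remaining pixels to a 3D point cloud with a pinhole model."""
    jump = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    valid = (depth > 0) & (jump < max_jump)
    v, u = np.nonzero(valid)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])

depth = np.full((480, 640), 2.0)
depth[240, 320] = 9.0                       # a single speckle outlier
pts = depth_to_points(depth, 525.0, 525.0, 319.5, 239.5)
print(len(pts))                             # outlier pixels rejected
```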
16

Ullah, Hayat, Osama Zia, Jun Ho Kim, Kyungjin Han, and Jong Weon Lee. "Automatic 360° Mono-Stereo Panorama Generation Using a Cost-Effective Multi-Camera System." Sensors 20, no. 11 (May 30, 2020): 3097. http://dx.doi.org/10.3390/s20113097.

Abstract:
In recent years, 360° videos have gained the attention of researchers due to their versatility and applications to real-world problems. Easy access to different visual sensor kits and easily deployable image acquisition devices have also played a vital role in the growth of interest in this area within the research community. Recently, several 360° panorama generation systems have demonstrated panoramas of reasonable quality. However, these systems are equipped with expensive image sensor networks in which multiple cameras are mounted on a circular rig with specific overlapping gaps. In this paper, we propose an economical 360° panorama generation system that generates both mono and stereo panoramas. For mono panorama generation, we present a drone-mounted image acquisition sensor kit that consists of six cameras placed in a circular fashion with an optimal overlapping gap. The hardware of our proposed image acquisition system is configured in such a way that no user input is required to stitch multiple images. For stereo panorama generation, we propose a lightweight, cost-effective visual sensor kit that uses only three cameras to cover 360° of the surroundings. We also developed stitching software that generates both mono and stereo panoramas using a single image-stitching pipeline, in which the panorama generated by our proposed system is automatically straightened without visible seams. Furthermore, we compared our proposed system with existing mono and stereo content generation systems from both qualitative and quantitative perspectives, and the comparative measurements verified the effectiveness of our system relative to existing mono and stereo generation systems.
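For a flavor of the stitching stage, OpenCV's high-level Stitcher can build a mono panorama from overlapping views; this generic sketch is not the authors' pipeline (which also straightens the panorama and produces stereo output), and the file names are placeholders.

```python
import cv2

# Placeholder file names; six overlapping views from a circular rig.
images = [cv2.imread(f"cam{i}.jpg") for i in range(6)]
images = [im for im in images if im is not None]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print("stitching failed with status", status)
```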
17

Dal Cortivo, Davide, Sara Mandelli, Paolo Bestagini, and Stefano Tubaro. "CNN-Based Multi-Modal Camera Model Identification on Video Sequences." Journal of Imaging 7, no. 8 (August 5, 2021): 135. http://dx.doi.org/10.3390/jimaging7080135.

Abstract:
Identifying the source camera of images and videos has gained significant importance in multimedia forensics. It allows tracing data back to their creator, thus making it possible to solve copyright infringement cases and expose the authors of hideous crimes. In this paper, we focus on the problem of camera model identification for video sequences, that is, given a video under analysis, detecting the camera model used for its acquisition. To this purpose, we develop two different CNN-based camera model identification methods, working in a novel multi-modal scenario. Unlike mono-modal methods, which use only the visual or audio information from the investigated video to tackle the identification task, the proposed multi-modal methods jointly exploit audio and visual information. We test our proposed methodologies on the well-known Vision dataset, which collects almost 2000 video sequences belonging to different devices. Experiments are performed considering native videos directly acquired by their acquisition devices and videos uploaded on social media platforms, such as YouTube and WhatsApp. The achieved results show that the proposed multi-modal approaches significantly outperform their mono-modal counterparts, representing a valuable strategy for the tackled problem and opening future research to even more challenging scenarios.
18

Hashimoto, Naoya, Keisuke Yoneda, Ryo Yanase, Mohammad Aldibaja, Naoki Suganuma, and Kei Sato. "Longitudinal Improvement for Self-Localization Based on Mono-Camera and Traffic Signs." International Journal of Automotive Engineering 9, no. 4 (2018): 195–201. http://dx.doi.org/10.20485/jsaeijae.9.4_195.

19

Yoneda, Keisuke, Ryo Yanase, Mohammad Aldibaja, Naoki Suganuma, and Kei Sato. "Mono-camera based vehicle localization using lidar intensity map for automated driving." Artificial Life and Robotics 24, no. 2 (October 15, 2018): 147–54. http://dx.doi.org/10.1007/s10015-018-0502-6.

20

HASHIMOTO, Naoya, Keisuke YONEDA, and Naoki SUGANUMA. "Longitudinal Improvement for Self-Localization based on Mono-Camera and Traffic Signs." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2017 (2017): 2P2-C06. http://dx.doi.org/10.1299/jsmermd.2017.2p2-c06.

21

Jung, Yong Ju. "Enhancement of low light level images using color-plus-mono dual camera." Optics Express 25, no. 10 (May 12, 2017): 12029. http://dx.doi.org/10.1364/oe.25.012029.

22

Shan, Zeyong, Ruijian Li, and Sören Schwertfeger. "RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots." Sensors 19, no. 10 (May 15, 2019): 2251. http://dx.doi.org/10.3390/s19102251.

Abstract:
Using camera sensors for ground robot Simultaneous Localization and Mapping (SLAM) has many benefits over laser-based approaches, such as the low cost and higher robustness. RGBD sensors promise the best of both worlds: dense data from cameras with depth information. This paper proposes to fuse RGBD and IMU data for a visual SLAM system, called VINS-RGBD, that is built upon the open source VINS-Mono software. The paper analyses the VINS approach and highlights the observability problems. Then, we extend the VINS-Mono system to make use of the depth data during the initialization process as well as during the VIO (Visual Inertial Odometry) phase. Furthermore, we integrate a mapping system based on subsampled depth data and octree filtering to achieve real-time mapping, including loop closing. We provide the software as well as datasets for evaluation. Our extensive experiments are performed with hand-held, wheeled and tracked robots in different environments. We show that ORB-SLAM2 fails for our application and see that our VINS-RGBD approach is superior to VINS-Mono.
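The subsampling and octree-style filtering used for real-time mapping can be approximated with a flat voxel grid, as in this simplified sketch; the voxel size and centroid rule are assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Keep one centroid per occupied voxel: quantize coordinates to a
    grid, group points by voxel key, and average each group."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse)
    return sums / counts[:, None]

cloud = np.random.rand(10000, 3)                 # toy 1 m cube of points
print(voxel_downsample(cloud, 0.1).shape)        # ~1000 voxel centroids
```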
23

Ciullo, Vito, Lucile Rossi, and Antoine Pieri. "Experimental Fire Measurement with UAV Multimodal Stereovision." Remote Sensing 12, no. 21 (October 29, 2020): 3546. http://dx.doi.org/10.3390/rs12213546.

Abstract:
In wildfire research, systems able to estimate the geometric characteristics of fire are required in order to understand and model the behavior of this spreading and dangerous phenomenon. Over the past decade, there has been a growing interest in the use of computer vision and image processing technologies. The majority of these works have considered multiple mono-camera systems, merging the information obtained from each camera. Recent studies have introduced the use of stereovision in this field; for example, a framework with multiple ground stereo pairs of cameras has been developed to measure fires spreading for about 10 meters. This work proposes an unmanned aerial vehicle multimodal stereovision framework which allows estimation of the geometric characteristics of fires propagating over long distances. The vision system is composed of two cameras operating simultaneously in the visible and infrared spectral bands. The main result of this work is the development of a portable drone system able to obtain georeferenced stereoscopic multimodal images, associated with a method for the estimation of fire geometric characteristics. The performance of the proposed system is tested through various experiments, which reveal its efficiency and potential for use in monitoring wildfires.
24

Muslikhin, Jenq-Ruey Horng, Szu-Yueh Yang, and Ming-Shyan Wang. "Object Localization and Depth Estimation for Eye-in-Hand Manipulator Using Mono Camera." IEEE Access 8 (2020): 121765–79. http://dx.doi.org/10.1109/access.2020.3006843.

25

Lee, Hun, Chul Hong Kim, Tae-Jae Lee, and Dong-Il “Dan” Cho. "Performance Simulation of Various Feature-Initialization Algorithms for Forward-Viewing Mono-Camera-Based SLAM." Journal of Institute of Control, Robotics and Systems 22, no. 10 (October 31, 2016): 833–38. http://dx.doi.org/10.5302/j.icros.2016.16.0096.

26

Knyaz, V. A., and P. V. Moshkantsev. "JOINT GEOMETRIC CALIBRATION OF COLOR AND THERMAL CAMERAS FOR SYNCHRONIZED MULTIMODAL DATASET CREATING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W18 (November 29, 2019): 79–84. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w18-79-2019.

Abstract:
With the increasing performance and availability of thermal cameras, the number of applications using them for various purposes grows noticeably. Nowadays, thermal vision is widely used in industrial control and monitoring, thermal mapping of industrial areas, surveillance, and robotics, which output huge amounts of thermal images. This circumstance creates the necessary basis for applying deep learning, which demonstrates state-of-the-art performance on the most complicated computer vision tasks. Using different modalities for scene analysis makes it possible to outperform mono-modal processing, but in the case of machine learning it requires a synchronized, annotated multimodal dataset. The prerequisite for creating such a dataset is the geometric calibration of the sensors used for image acquisition. The purpose of the performed study was therefore to develop a technique for the joint calibration of color and long-wave infrared cameras to be used for collecting the multimodal dataset needed for developing and evaluating computer vision algorithms. The paper presents techniques for camera parameter estimation and an experimental evaluation of the interior orientation of color and long-wave infrared cameras for further use in dataset collection. The results of using the geometrically calibrated cameras for 3D reconstruction and realistic texturing of 3D models based on visible and thermal imagery are also presented. They proved the effectiveness of the developed techniques for collecting and augmenting a synchronized multimodal imagery dataset for training and evaluating convolutional neural network models.
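Per-camera interior orientation can be estimated with the standard OpenCV checkerboard routine, sketched below; the paper's joint color/thermal technique additionally requires a calibration target observable in both spectra (e.g., a heated board), and the paths and board geometry here are placeholders.

```python
import cv2
import numpy as np

pattern = (9, 6)                 # inner corners of the checkerboard
square = 0.025                   # square size in metres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in [f"view_{i:02d}.png" for i in range(20)]:   # placeholder paths
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

if not obj_pts:
    raise SystemExit("no checkerboard views found")
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("intrinsics K:\n", K)
```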
27

Park, Mingyu, Hyeonseok Kim, Hyukdoo Choi, and Seongkeun Park. "A Study on Vehicle Detection and Distance Classification Using Mono Camera Based on Deep Learning." Journal of Korean Institute of Intelligent Systems 29, no. 2 (April 30, 2019): 90–96. http://dx.doi.org/10.5391/jkiis.2019.29.2.90.

28

Shen, Yun De, Dong Soo Cho, Chang Doo Kee, and Zhen Zhe Li. "Tracking Compensation of a Moving Target for a Biped Robot Based on Vision Sensor." Applied Mechanics and Materials 44-47 (December 2010): 788–93. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.788.

Abstract:
In this paper, a visual tracking algorithm for a moving target is proposed for a biped robot whose camera movement is irregular. The hexagonal matching algorithm is used to measure the changes in size, location, and rotation angle of a moving object from its image frame. To enhance the efficiency of the tracking, the starting point and the size of the search area are adaptively adjusted using the image information obtained. Finally, by using an affine transform and a Kalman filter, the position estimate of the moving target is refined against the swing of the camera. Experiments with a 20-DOF biped robot using a mono vision sensor were implemented to prove the reliability of the proposed method.
29

Tchernykh, V., M. Beck, and K. Janschek. "OPTICAL FLOW NAVIGATION FOR AN OUTDOOR UAV USING A WIDE ANGLE MONO CAMERA AND DEM MATCHING." IFAC Proceedings Volumes 39, no. 16 (2006): 590–95. http://dx.doi.org/10.3182/20060912-3-de-2911.00103.

30

Zhang, Chaofan, Yong Liu, Fan Wang, Yingwei Xia, and Wen Zhang. "VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation." Sensors 18, no. 11 (November 19, 2018): 4036. http://dx.doi.org/10.3390/s18114036.

Abstract:
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods are degraded in complex conditions due to the limited field of view (FOV) of the utilized camera. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF) that can provide accurate and robust state estimation for robots in an indoor environment. We first modify monocular ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping) to use multiple fisheye cameras alongside an inertial measurement unit (IMU) to provide large-FOV visual-inertial information. Then, a novel VO framework is proposed to ensure the efficiency of state estimation, by adopting a GPU (Graphics Processing Unit)-based feature extraction method and parallelizing the feature extraction thread, which is separated from the tracking and mapping threads. Finally, a nonlinear optimization method is formulated for accurate state estimation, characterized as being multi-keyframe, tightly-coupled and visual-inertial. In addition, accurate initialization and a novel MultiCol-IMU camera model are coupled to further improve the performance of VINS-MKF. To the best of our knowledge, it is the first tightly-coupled multi-keyframe visual-inertial odometry that joins measurements from multiple fisheye cameras and an IMU. The performance of VINS-MKF was validated by extensive experiments using home-made datasets, and it showed improved accuracy and robustness over the state-of-the-art VINS-Mono.
31

Hanel, A., A. Mitschke, R. Boerner, D. Van Opdenbosch, L. Hoegner, D. Brodie, and U. Stilla. "METRIC SCALE CALCULATION FOR VISUAL MAPPING ALGORITHMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 433–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-433-2018.

Abstract:
Visual SLAM algorithms localize the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived by projecting their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
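A minimal sketch of fusing the individual scale values, assuming inverse-variance weighting (the paper's exact fusion rule may differ):

```python
import numpy as np

def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual metric-scale
    estimates, each computed as known_metric_size / unscaled_model_size."""
    w = 1.0 / np.square(sigmas)
    fused = np.sum(w * scales) / np.sum(w)
    return fused, np.sqrt(1.0 / np.sum(w))

# e.g. driving-lane width, room height, and a traffic sign each yield
# one estimate of metres per model unit, with assumed uncertainties.
scales = np.array([0.131, 0.127, 0.135])
sigmas = np.array([0.004, 0.002, 0.006])
print(fuse_scales(scales, sigmas))
```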
32

Yuan, Lei-ming, Jian-rong Cai, Li Sun, and Chuang Ye. "A Preliminary Discrimination of Cluster Disqualified Shape for Table Grape by Mono-Camera Multi-Perspective Simultaneously Imaging Approach." Food Analytical Methods 9, no. 3 (July 10, 2015): 758–67. http://dx.doi.org/10.1007/s12161-015-0250-3.

33

Fu, Dong, Hao Xia, and Yanyou Qiao. "Monocular Visual-Inertial Navigation for Dynamic Environment." Remote Sensing 13, no. 9 (April 21, 2021): 1610. http://dx.doi.org/10.3390/rs13091610.

Abstract:
Simultaneous localization and mapping (SLAM) systems have generally been limited to static environments. Moving objects considerably reduce the location accuracy of SLAM systems, rendering them unsuitable for several applications. Using a combined vision camera and inertial measurement unit (IMU) to separate moving and static objects in dynamic scenes, we improve the location accuracy and adaptability of SLAM systems in these scenes. We develop an algorithm that uses IMU data to eliminate feature-point matches on moving objects while retaining matches on stationary objects. Moreover, we develop a second algorithm to validate the IMU data and prevent erroneous data from influencing image feature-point matching. We test the new algorithms with public datasets and in a real-world experiment. In terms of the root-mean-square error of the absolute pose error, the proposed method exhibited higher positioning accuracy on the public datasets than the traditional algorithms. Compared with the closed-loop errors obtained by OKVIS-mono and VINS-mono, those obtained in the practical experiment were lower by 50.17% and 56.91%, respectively. Thus, the proposed method effectively eliminates the matching points on moving objects and achieves realistic feature-point matching results.
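The match-elimination idea can be sketched for the rotation-only case: predict where a static feature should land under the IMU-integrated rotation and reject matches that deviate too far. The rotation-only homography and pixel threshold are simplifying assumptions of this sketch.

```python
import numpy as np

def reject_moving_matches(p0, p1, K, R, thresh_px=3.0):
    """Predict where static features should land under the IMU-integrated
    camera rotation R (via the rotation-only homography K R K^-1) and
    keep only matches within thresh_px of that prediction."""
    H = K @ R @ np.linalg.inv(K)
    ph = np.hstack([p0, np.ones((len(p0), 1))])
    pred = (H @ ph.T).T
    pred = pred[:, :2] / pred[:, 2:3]
    err = np.linalg.norm(pred - p1, axis=1)
    return err < thresh_px            # False -> match likely on a mover

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)                                    # toy case: no rotation
p0 = np.array([[100.0, 120.0], [300.0, 200.0]])
p1 = np.array([[100.5, 120.2], [330.0, 260.0]])  # second point moved a lot
print(reject_moving_matches(p0, p1, K, R))       # -> [ True False]
```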
34

Yu, Junwei, and Zhuoping Yu. "Mono-Vision Based Lateral Localization System of Low-Cost Autonomous Vehicles Using Deep Learning Curb Detection." Actuators 10, no. 3 (March 11, 2021): 57. http://dx.doi.org/10.3390/act10030057.

Abstract:
The localization system of low-cost autonomous vehicles, such as autonomous sweepers, requires high lateral localization accuracy, as the vehicle needs to keep a small lateral distance between the side-brush system and the road curb. Existing methods usually rely on a global navigation satellite system, which often loses its signal in cluttered environments such as streets between high buildings and trees. In a GPS-denied environment, map-based methods such as visual and LiDAR odometry systems are often used. Apart from the heavy computational cost of feature extraction, they are too expensive to meet the low-price market of low-cost autonomous vehicles. To address these issues, we propose a mono-vision based lateral localization system for an autonomous sweeper. Our system relies on a fish-eye camera and precisely detects road curbs with a deep curb detection network. Curb locations are then used as straightforward marks to control the lateral motion of the vehicle. On our self-recorded dataset, our curb detection network achieves 93% pixel-level precision. In addition, experiments are performed with an intelligent sweeper to prove the accuracy and robustness of our proposed approach. Results demonstrate that the average lateral distance error and the maximum invalid rate are within 0.035 m and 9.2%, respectively.
35

Sun, Wen. "A Laser Tracking-Vision Guiding Measurement System for Large-Scale Parts Assembly." Advanced Materials Research 718-720 (July 2013): 868–74. http://dx.doi.org/10.4028/www.scientific.net/amr.718-720.868.

Abstract:
To address the problems of low efficiency, high cost and low automation level in traditional laser tracking measurement systems, a laser tracking-vision guiding measurement system for large-scale parts assembly is introduced in this paper. The system is composed of a mono-camera and a laser tracker and can perform real-time tracking and automatic measurement throughout the whole assembly process. A global calibration method based on public planes and a method for finding the 3D positions of the target reflectors based on monocular vision measurement are expounded. A mathematical model is established, the measurement system is built, and an experiment is carried out. The experimental results indicate that the proposed system features a relatively high degree of automation and high measuring speed, and has promise for practical application.
36

Jaramillo-Rojas, Gloria Elena, and John William Branch Bedoya. "Optimized registration based on an ant colony for markerless augmented reality systems." DYNA 87, no. 212 (January 1, 2020): 259–66. http://dx.doi.org/10.15446/dyna.v87n212.84039.

Abstract:
Accurate registration in augmented reality systems is essential to guarantee the visual consistency of the augmented environment. Although error in the virtual-real alignment is almost unavoidable, different approaches have been proposed to quantify and reduce such errors. However, many of the existing solutions require a lot of a priori information, or they only focus on camera calibration to guarantee good results in the registration. This article presents a heuristic method that aims to reduce registration errors in markerless augmented reality systems. The proposed solution sees error reduction as a mono-objective optimization problem, which is addressed by means of the Ant Colony Optimization (ACO) algorithm. Experimental results reveal the validity of the proposed method, reaching an average error of 1.49 pixels for long video sequences.
37

ANSUATEGUI, ANDER, AITOR IBARGUREN, JOSÉ MARÍA MARTÍNEZ-OTZETA, CARLOS TUBÍO, and ELENA LAZKANO. "PARTICLE FILTERING FOR PEOPLE FOLLOWING BEHAVIOR USING LASER SCANS AND STEREO VISION." International Journal on Artificial Intelligence Tools 20, no. 02 (April 2011): 313–26. http://dx.doi.org/10.1142/s0218213011000176.

Abstract:
Mobile robots have a large application potential in everyday life. To build those applications, some common and basic behaviors should first be consolidated, including a people-following behavior. In this paper, a system able to follow a person based on information provided by a laser scanner and mono and stereo cameras is presented. To accomplish this goal, a real-time particle filter system is proposed that merges the information provided by the sensors (laser and 2D and 3D images) and calculates the position of the target, using probabilistic leg patterns, image features and optical flow to this end. The experiments carried out show promising results, allowing real-time particle filtering based on two different information sources.
38

Zou, Yuan Yuan, Ming Yang Zhao, and Lian Zhu Liu. "Post-Weld Quality Inspection for CO2 Laser Tailored Blank Welding with Vision Sensor." Advanced Materials Research 97-101 (March 2010): 4337–41. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.4337.

Abstract:
The traditional method for post-weld quality inspection is visual testing by people. The major limitations of this method are the subjectivity of the judgments and the time required to perform the inspection. With the increasing use of laser welding in the automotive industry, manufacturers are struggling with the challenge of automating the quality inspection process. This paper discusses automatic post-weld quality inspection by computer vision sensing for mono-thickness tailored blanks in CO2 laser welding. A visual sensor was developed for acquiring the original image of the weld seam. The sensor consists of a PC-based vision camera and a stripe-type laser diode. An image processing algorithm is presented to detect the geometrical defects of the weld seam. Experiments are carried out and some applications are given.
39

Lee, Tae-jae, Byung-moon Jang, and Dong-il “Dan” Cho. "A novel method for estimating the heading angle for a home service robot using a forward-viewing mono-camera and motion sensors." International Journal of Control, Automation and Systems 13, no. 3 (March 28, 2015): 709–17. http://dx.doi.org/10.1007/s12555-014-9111-x.

40

Xu, Changhui, Zhenbin Liu, and Zengke Li. "Robust Visual-Inertial Navigation System for Low Precision Sensors under Indoor and Outdoor Environments." Remote Sensing 13, no. 4 (February 20, 2021): 772. http://dx.doi.org/10.3390/rs13040772.

Abstract:
Simultaneous Localization and Mapping (SLAM) has been the focus of robot navigation for many decades and has become a research hotspot in recent years. Because a SLAM system based on a vision sensor is vulnerable to environmental illumination and texture, the problem of initial scale ambiguity still exists in a monocular SLAM system. The fusion of a monocular camera and an inertial measurement unit (IMU) can effectively solve the scale ambiguity problem, improve the robustness of the system, and achieve higher positioning accuracy. Based on a monocular visual-inertial navigation system (VINS-Mono), a state-of-the-art fusion of monocular vision and an IMU, this paper designs a new initialization scheme that calculates the accelerometer bias as a variable during the initialization process so that it can be applied to low-cost IMU sensors. In addition, to obtain better initialization accuracy, a visual matching positioning method based on feature points is used to assist the initialization process. After initialization, the system switches to optical-flow tracking for visual positioning to reduce the computational complexity. Using the proposed method, the advantages of the feature-point method and the optical-flow method are fused. This paper, the first to use both the feature-point method and the optical-flow method, achieves better overall positioning accuracy and robustness with low-cost sensors. Experiments conducted with the EuRoC dataset and in a campus environment show that the initial values obtained through the initialization process can be efficiently used to launch the nonlinear visual-inertial state estimator, and the positioning accuracy of the improved VINS-Mono is about 10% better than that of VINS-Mono.
41

Hotta, Norifumi, Tomoyuki Iwata, Takuro Suzuki, and Yuichi Sakai. "The Effects of Particle Segregation on Debris Flow Fluidity Over a Rigid Bed." Environmental and Engineering Geoscience 27, no. 1 (November 18, 2020): 139–49. http://dx.doi.org/10.2113/eeg-d-20-00106.

Abstract:
It is essential to consider the fluidity of a debris flow front when calculating its impact. Here we flume-tested mono-granular and bi-granular debris flows and compared the results with those of numerical simulations. We used sand particles with diameters of 0.29 and 0.14 cm at two mixing ratios, 1:1 and 3:7. Particle segregation was recorded with a high-speed video camera. We evaluated the fronts of the debris flows at 0.5-second intervals. We then numerically simulated one-dimensional debris flows under the same conditions, using the mean particle diameter when simulating mixed-diameter flows. For the mono-granular debris flows, the experimental and simulated results showed good agreement in terms of flow depth, front velocity, and flux. However, for the bi-granular debris flows, the simulated flow depth was smaller, and both the front velocity and the flux were greater than those found experimentally. These differences may be attributable to the fact that the dominant shear stress was caused by the concentration of smaller sediment particles in the lower flow layers; such inverse gradations were detected in the debris flow bodies. Under these conditions, most of the shear stress is supported by smaller particles in the lower layers, and the debris flow characteristics become similar to those of mono-granular flows, in contrast to the numerical simulation, which incorporated particle segregation with a mean diameter that gradually decreased from the front to the flow body. Consequently, the calculated front velocities were underestimated; particle segregation at the front of the bi-granular debris flows did not affect fluidity either initially or over time.
42

Antink, Christoph Hoog, Simon Lyra, Michael Paul, Xinchi Yu, and Steffen Leonhardt. "A Broader Look: Camera-Based Vital Sign Estimation across the Spectrum." Yearbook of Medical Informatics 28, no. 01 (August 2019): 102–14. http://dx.doi.org/10.1055/s-0039-1677914.

Abstract:
Objectives: Camera-based vital sign estimation allows the contactless assessment of important physiological parameters. Seminal contributions were made in the 1930s, 1980s, and 2000s, and the speed of development seems ever increasing. In this survey, we aim to review the most recent works in this area, describe their common features as well as shortcomings, and highlight interesting “outliers”. Methods: We performed a comprehensive literature search and quantitative analysis of papers published between 2016 and 2018. Quantitative information about the number of subjects, studies with healthy volunteers vs. pathological conditions, public datasets, laboratory vs. real-world works, types of camera, usage of machine learning, and spectral properties of data was extracted. Moreover, a qualitative analysis of the illumination used and of recent advances in algorithmic development was also performed. Results: Since 2016, 116 papers have been published on camera-based vital sign estimation, and 59% of papers presented results on 20 or fewer subjects. While the average number of participants increased from 15.7 in 2016 to 22.9 in 2018, the vast majority of papers (n=100) were on healthy subjects. Four public datasets were used in 10 publications. We found 27 papers whose application scenario could be considered a real-world use case, such as monitoring during exercise or driving. These include 16 papers that dealt with non-healthy subjects. The majority of papers (n=61) presented results based on visual, red-green-blue (RGB) information, followed by RGB combined with other parts of the electromagnetic spectrum (n=18), and thermography only (n=12), while other works (n=25) used other mono- or polychromatic non-RGB data. Surprisingly, a minority of publications (n=39) made use of consumer-grade equipment. Lighting conditions were primarily uncontrolled or ambient. While some works focused on specialized aspects such as the removal of vital sign information from video streams to protect privacy or the influence of video compression, most algorithmic developments were related to three areas: region-of-interest selection, tracking, or extraction of a one-dimensional signal. Seven papers used deep learning techniques, 17 papers used other machine learning approaches, and 92 made no explicit use of machine learning. Conclusion: Although some general trends and frequent shortcomings are obvious, the spectrum of publications related to camera-based vital sign estimation is broad. While many creative solutions and unique approaches exist, the lack of standardization hinders the comparability of these techniques and of their performance. We believe that sharing algorithms and/or datasets will alleviate this and would allow the application of newer techniques such as deep learning.
43

Komjaty, Andrei, Elena Stela Wisznovszky (Muncut), and Lavinia Ioana Culda. "Study on the influence of technological parameters on 3D printing with SLA technology." MATEC Web of Conferences 343 (2021): 01003. http://dx.doi.org/10.1051/matecconf/202134301003.

Abstract:
The influence of the technological parameters of printing with SLA (stereolithography) technology is presented. This printing technology is based on the use of a photosensitive resin, which polymerizes in contact with UV rays at a wavelength of 405 nm. An Anycubic Photon Mono printer is used to print parts, for which the dimensional accuracy and the condition of the resulting surfaces are analyzed. The study examines the influence of the polymerization time of the resin (5 s to 10 s), of the advance step between successively deposited layers (0.05 mm to 0.2 mm), and of the placement of the reference mark for printing. A basic-type black photosensitive resin (a monomer with photo-initiator) made by Anycubic is used. A Sony Alpha 37 DSLR camera with a Sony SAL-100M28 100 mm f/2.8 AF macro lens is used to capture images of the resulting surfaces.
44

Byun, Ki-hoon, Se-jin Kim, and Jang-woo Kwon. "Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder." Journal of The Korea Institute of Intelligent Transport Systems 16, no. 2 (April 30, 2017): 36–54. http://dx.doi.org/10.12815/kits.2017.16.2.36.

45

Li, Minhui, Redmond R. Shamshiri, Michael Schirrmann, and Cornelia Weltzien. "Impact of Camera Viewing Angle for Estimating Leaf Parameters of Wheat Plants from 3D Point Clouds." Agriculture 11, no. 6 (June 20, 2021): 563. http://dx.doi.org/10.3390/agriculture11060563.

Abstract:
Estimation of plant canopy using low-altitude imagery can help monitor the normal growth status of crops and is highly beneficial for various digital farming applications such as precision crop protection. However, extracting 3D canopy information from raw images requires studying the effect of the sensor viewing angle while taking into account the limitations of the mobile platform's routes inside the field. The main objective of this research was to estimate wheat (Triticum aestivum L.) leaf parameters, including leaf length and width, from a 3D model representation of the plants. For this purpose, experiments with different camera viewing angles were conducted to find the optimum setup of a mono-camera system that would result in the best 3D point clouds. The angle-control analytical study was conducted on a four-row wheat plot with a row spacing of 0.17 m, with two seeding densities and growth stages as factors. Nadir and six oblique-view image datasets were acquired from the plot with 88% overlap and were then reconstructed into point clouds using Structure from Motion (SfM) and Multi-View Stereo (MVS) methods. Point clouds were first categorized into three classes: wheat canopy, soil background, and experimental plot. The wheat canopy class was then used to extract leaf parameters, which were compared with values from manual measurements. The comparison showed that (i) the multiple-view dataset provided the best estimates of leaf length and leaf width; (ii) among the single-view datasets, canopy and leaf parameters were best modeled at angles of −45° vertically and 0° horizontally (VA −45, HA 0); and (iii) in the nadir view, fewer underlying 3D points were obtained, with a missing-leaf rate of 70%. It was concluded that oblique imagery is a promising approach to effectively estimate the 3D representation of the wheat canopy with SfM-MVS using a single-camera platform for crop monitoring. This study contributes to the improvement of proximal sensing platforms for crop health assessment.
46

McIlvenny, Paul. "The future of ‘video’ in video-based qualitative research is not ‘dumb’ flat pixels! Exploring volumetric performance capture and immersive performative replay." Qualitative Research 20, no. 6 (February 22, 2020): 800–818. http://dx.doi.org/10.1177/1468794120905460.

Abstract:
Qualitative research that focuses on social interaction and talk has been increasingly based, for good reason, on collections of audiovisual recordings in which 2D flat-screen video and mono/stereo audio are the dominant recording media. This article argues that the future of ‘video’ in video-based qualitative studies will move away from ‘dumb’ flat pixels in a 2D screen. Instead, volumetric performance capture and immersive performative replay rely on a procedural camera/spectator-independent representation of a dynamic real or virtual volumetric space over time. It affords analytical practices of re-enactment – shadowing or redoing modes of seeing/listening as an active spectation for ‘another next first time’ – which play on the tense relationships between live performance, observability, spectatorship and documentation. Three examples illustrate how naturally occurring social interaction and settings can be captured volumetrically and re-enacted immersively in virtual reality (VR) and what this means for data integrity, evidential adequacy and qualitative analysis.
47

Espada, Yoan, Nicolas Cuperlier, Guillaume Bresson, and Olivier Romain. "From Neurorobotic Localization to Autonomous Vehicles." Unmanned Systems 07, no. 03 (July 2019): 183–94. http://dx.doi.org/10.1142/s2301385019410048.

Abstract:
The navigation of autonomous vehicles is confronted with the problem of an efficient place recognition system that is able to handle outdoor environments in the long run. The current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. This model is based on an attentional mechanism and only takes into account visual information from a mono-camera and orientation information to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not need a long learning phase, as it uses a one-shot learning system. Such a localization model has already been integrated into a robot control architecture, which has allowed successful navigation both in indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (images and orientation) grabbed from a moving vehicle are studied (coming from the KITTI odometry datasets and from datasets taken with VEDECOM vehicles). The results show the strong adaptability of this bio-inspired model, primarily developed for indoor navigation, to different kinds of environments.
48

Zhang, Shengchang, Christine Campagne, and Fabien Salaün. "Preparation of Electrosprayed Poly(caprolactone) Microparticles Based on Green Solvents and Related Investigations on the Effects of Solution Properties as Well as Operating Parameters." Coatings 9, no. 2 (January 30, 2019): 84. http://dx.doi.org/10.3390/coatings9020084.

Abstract:
Electrosprayed poly(caprolactone) (PCL) microparticles were produced using five solvents (ethyl acetate, acetone, anisole, glacial acetic acid, and chloroform) under different PCL concentrations and operating parameters. Not only is a green and appropriate solvent for PCL electrospraying identified, but the effects of solution properties (surface tension, electrical conductivity, viscosity, and vapor pressure) and operating parameters (flow rate, working distance, and applied voltage) on the formation of electrosprayed particles are also clarified. The formation and shape of the Taylor cone during electrospraying were observed in high-speed images captured with a camera, and the size and morphology of the electrosprayed particles were characterized by optical and scanning electron microscopy. It can be concluded that the cone-jet range of the applied voltage mainly depends on electrical conductivity, and an ideal Taylor cone forms more easily under high viscosity and low surface tension. Although high electrical conductivity helps fabricate tiny particles, mono-dispersed microparticles are easier to fabricate under low electrical conductivity. The poly-dispersed distribution obtained at high electrical conductivity converts into a mono-dispersed distribution as viscosity increases. Furthermore, the size of the electrosprayed particles also correlates with the surface tension and vapor pressure of the solvent used. Ethyl acetate, owing to its mild electrical conductivity and surface tension and its moderate viscosity and vapor pressure, is a green and suitable solvent for PCL electrospraying. Single-pore PCL microparticles with a smooth cherry-like morphology can be prepared from ethyl acetate. Finally, a long working distance not only stabilizes the break-up of the charged jet but also promotes the evaporation of the solvent.
49

Sahoo, Gourishankar, Rita Paikaray, Subrata Samantaray, Dheeren Chandra Patra, Narayan Chandra Sasini, Joydeep Ghosh, Malay Bikash Chowdhuri, and Amulya Sanyasi. "A Compact Plasma System for Experimental Study." Applied Mechanics and Materials 278-280 (January 2013): 90–100. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.90.

Abstract:
A compact plasma system has been set up at Ravenshaw University, India. The plasma system consists of a curved vacuum chamber, which is a section of a toroid (θ = 70°) with minor radius r = 0.3 m and major radius R = 0.5 m; a vacuum system; an electromagnet; a gas-injected, washer-stacked plasma gun to produce plasma blobs/filaments; a pulse forming network to energize the plasma gun; diagnostic tools such as electric probes, magnetic probes, a spectrometer, and a high-speed CCD camera; and a digital pulse/delay generator to synchronize the diagnostic tools. A pair of copper coils is wound over the chamber, and a capacitive pulse is fed to the coils to produce a non-uniform magnetic field inside the chamber. The gas-injected, washer-stacked plasma gun is a mono-anode, multi-cathode system with five cathodes made of brass and an anode made of copper. The gun impedance is ~15 Ω. The pulse forming network (PFN) is of the Guillemin E type and consists of capacitors of equal capacitance (5.5 μF) and inductors of equal inductance (1.5 μH). The pulse width of the PFN is ~7.6 μs for a seven-stage network, as tested with a known resistive circuit. Magnetic probes were designed and calibrated using a Helmholtz coil to map the radial magnetic field profile of the plasma chamber. Electric probes, such as a Langmuir triple probe and velocity probes, were designed to measure plasma parameters such as blob velocity, density, and temperature. An emission spectroscopy method is used to identify the charged species inside the plasma. The high-speed CCD camera is used to interpret the structure of the plasma. A digital pulse/trigger generator is used to synchronize the CCD camera, the spectrometer, the switching thyristor, etc. Preliminary results are also reported.
50

Zhao, Huijie, Zefu Xu, Hongzhi Jiang, and Guorui Jia. "SWIR AOTF Imaging Spectrometer Based on Single-pixel Imaging." Sensors 19, no. 2 (January 18, 2019): 390. http://dx.doi.org/10.3390/s19020390.

Abstract:
An acousto-optic tunable filter (AOTF) is a new type of mono-wavelength generator, and an AOTF imaging spectrometer can obtain spectral images of interest. However, due to the limitations of the AOTF aperture and acceptance angle, the light passing through the AOTF imaging spectrometer is weak, especially in the short-wave infrared (SWIR) region. In weak-light conditions, the noise of a non-deep-cooled mercury cadmium telluride (MCT) detector is high compared with the camera response, and thus effective spectral images cannot be obtained. In this study, the single-pixel imaging (SPI) technique was applied to the AOTF imaging spectrometer, which can then obtain spectral images because a short-focus lens collects the light into a small area. In our experiments, we showed that the irradiance of a short-focus system is much higher than that of a long-focus system for the AOTF imaging spectrometer. An SPI experimental setup was then built to obtain spectral images that traditional systems cannot. This work provides an efficient way to acquire spectral images from 1000 to 2200 nm.
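The single-pixel imaging principle behind such a setup can be sketched with a Hadamard pattern basis: record one detector value per projected pattern, then invert the orthogonal measurement matrix. The basis choice is an assumption of this toy example, not the paper's optical implementation.

```python
import numpy as np
from scipy.linalg import hadamard

n = 16                                  # 4x4 scene, n patterns/measurements
H = hadamard(n).astype(float)           # +/-1 structured illumination basis
scene = np.random.rand(n)               # flattened ground-truth image
y = H @ scene                           # one detector reading per pattern
recovered = (H.T @ y) / n               # H^T H = n*I for Hadamard matrices
print(np.allclose(recovered, scene))    # -> True
```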