Journal articles on the topic 'Image and Sensor Fusion'

Consult the top 50 journal articles for your research on the topic 'Image and Sensor Fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Panguluri, Sumanth Kumar, and Laavanya Mohan. "A DWT Based Novel Multimodal Image Fusion Method." Traitement du Signal 38, no. 3 (2021): 607–17. http://dx.doi.org/10.18280/ts.380308.

Full text
Abstract:
Multimodal image fusion is now widely used as an important processing tool in various image-related applications. Different sensors have been developed to capture useful information, chiefly the infrared (IR) image sensor and the visible (VI) image sensor. Fusing the outputs of both sensors provides better and more accurate scene information. The major application areas of such fused images are military, surveillance, and remote sensing. For better target identification and overall scene understanding, the fused image has to provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed mainly at improving contrast as well as edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpening filter and a morphological hat transform are then applied separately to the resized IR and VI images. The DWT is used to produce "low-frequency" and "high-frequency" sub-bands. A "filters based mean-weighted fusion rule" and a "filters based max-weighted fusion rule" are newly introduced in this algorithm for combining the low-frequency and high-frequency sub-bands, respectively. The fused image is reconstructed with the IDWT. The proposed method outperforms similar existing techniques both subjectively and objectively.
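For readers who want to experiment with this family of methods, the following is a minimal sketch of a generic DWT fusion pipeline in Python (PyWavelets/NumPy), not the authors' exact filter-weighted rules: the approximation (low-frequency) sub-bands are averaged and the detail (high-frequency) coefficients are selected by absolute maximum before inverse reconstruction. Function names and parameters are illustrative.

```python
# Generic DWT image fusion sketch (not the paper's "filters based" rules):
# mean rule on the approximation band, max-abs rule on the detail bands,
# then reconstruction with the inverse DWT.
import numpy as np
import pywt

def dwt_fuse(ir: np.ndarray, vi: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two co-registered grayscale images of identical shape."""
    c_ir = pywt.wavedec2(ir.astype(np.float64), wavelet, level=level)
    c_vi = pywt.wavedec2(vi.astype(np.float64), wavelet, level=level)

    fused = [(c_ir[0] + c_vi[0]) / 2.0]                # mean rule for low frequencies
    for bands_ir, bands_vi in zip(c_ir[1:], c_vi[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)     # max-abs rule for high frequencies
            for a, b in zip(bands_ir, bands_vi)))
    return pywt.waverec2(fused, wavelet)
```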
APA, Harvard, Vancouver, ISO, and other styles
2

Umeda, Kazunori, Jun Ota, and Hisayuki Kimura. "Fusion of Multiple Ultrasonic Sensor Data and Image Data for Measuring an Object’s Motion." Journal of Robotics and Mechatronics 17, no. 1 (2005): 36–43. http://dx.doi.org/10.20965/jrm.2005.p0036.

Full text
Abstract:
Robot sensing requires two types of observation – intensive and wide-angle. We selected multiple ultrasonic sensors for intensive observation and an image sensor for wide-angle observation to measure a moving object's motion, using two kinds of fusion – one fusing data from the multiple ultrasonic sensors and the other fusing the two types of sensor data. The fusion of multiple ultrasonic sensor data takes advantage of the object moving from the measurement range of one ultrasonic sensor into another sensor's range. Both kinds of fusion are formulated in a Kalman filter framework. Simulation and experiments demonstrate the effectiveness of the method and its applicability to an actual robot system.
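The Kalman-filter formulation mentioned in this abstract can be pictured with a minimal one-dimensional constant-velocity filter that folds in position measurements from two sensors with different noise levels. The time step, noise variances, and sensor names below are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D constant-velocity Kalman filter fusing position measurements
# from two sensors with different (assumed) noise variances.
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition, dt = 0.1 s
Q = np.diag([1e-4, 1e-3])                # process noise covariance
H = np.array([[1.0, 0.0]])               # both sensors observe position only
R = {"ultrasonic": np.array([[1e-4]]),   # illustrative measurement variances
     "camera":     np.array([[1e-2]])}

x = np.zeros((2, 1))                     # state: [position, velocity]
P = np.eye(2)                            # state covariance

def kf_step(z: float, sensor: str) -> np.ndarray:
    """One predict/update cycle with measurement z from the named sensor."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                             # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R[sensor])      # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)                     # update with new measurement
    P = (np.eye(2) - K @ H) @ P
    return x.ravel()
```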
APA, Harvard, Vancouver, ISO, and other styles
3

Jittawiriyanukoon, C., and V. Srisarkun. "Evaluation of weighted fusion for scalar images in multi-sensor network." Bulletin of Electrical Engineering and Informatics 10, no. 2 (2021): 911–16. http://dx.doi.org/10.11591/eei.v10i2.1792.

Full text
Abstract:
Regular scalar-based image fusion faces the problem of how to prioritize and proportionally enrich image details in a multi-sensor network. Fusing and manipulating computer-vision patterns from multiple sensors is a practical way to address this. A fusion (integration) rule, bit-depth conversion, and truncation of the image information (due to conflicting sizes) are studied. Across the multi-sensor images, a fusion rule based on weighted priority is employed to restructure the prescribed details of the fused image. Experimental results confirm that the associated details of multiple images can be fused, the prescription is executed and, finally, features are improved. Visualization in both the spatial and frequency domains to support the image analysis is also presented.
APA, Harvard, Vancouver, ISO, and other styles
4

Praveena, S. Mary, R. Kanmani, and A. K. Kavitha. "A neuro fuzzy image fusion using block based feature level method." International Journal of Informatics and Communication Technology (IJ-ICT) 9, no. 3 (2020): 195. http://dx.doi.org/10.11591/ijict.v9i3.pp195-204.

Full text
Abstract:
Image fusion is a subfield of image processing in which two or more images are fused to create an image in which all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene. Multi-sensor images of the same scene are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects in the scene that are closer to the camera are in focus while the farther objects are blurred; conversely, when the farther objects are focused, the closer objects become blurred. To achieve an image in which all the objects are in focus, image fusion is performed either in the spatial domain or in a transformed domain. In recent times, the applications of image processing have grown immensely. Usually, due to the limited depth of field of optical lenses, especially those with greater focal lengths, it is impossible to obtain an image in which all the objects are in focus. Fusion therefore plays an important role in other image processing tasks such as image segmentation, edge detection, stereo matching and image enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed. The results of extensive experimentation performed to highlight the efficiency and utility of the proposed technique are presented. The work further compares fuzzy-based image fusion with the neuro-fuzzy fusion technique, along with quality evaluation indices.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Zhi-guo, Wei Wang, and Baolin Su. "Multi-sensor Image Fusion Algorithm Based on Multiresolution Analysis." International Journal of Online Engineering (iJOE) 14, no. 06 (2018): 44. http://dx.doi.org/10.3991/ijoe.v14i06.8697.

Full text
Abstract:
To solve the fusion problem of visible and infrared images, quality evaluation indices of image fusion were defined based on image fusion algorithms such as region fusion, wavelet transform, spatial frequency, the Laplacian pyramid and principal component analysis. Then, the curvelet transform was used in place of the wavelet transform to exploit its superior representation of curves. The method integrated the intensity channel and the infrared image, and then transformed the result back to the original space to get the fused color image. Finally, two groups of images at different time intervals were used to carry out experiments, and the fused images were compared with those obtained by the first five algorithms and their quality was evaluated. The experiments showed that the curvelet-based image fusion algorithm performed well and can effectively integrate the information of visible and infrared images. It is concluded that curvelet-based image fusion is a feasible multi-sensor image fusion algorithm based on multi-resolution analysis.
APA, Harvard, Vancouver, ISO, and other styles
6

Tan, Hai Feng, Wen Jie Zhao, De Jun Li, and Tian Wen Luo. "NSCT-Based Multi-Sensor Image Fusion Algorithm." Applied Mechanics and Materials 347-350 (August 2013): 3212–16. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3212.

Full text
Abstract:
To address the tendency of the selection (favoritism) method and the averaging method in multi-sensor image fusion to impair image contrast, an image fusion algorithm based on NSCT is proposed. Firstly, the algorithm applies NSCT to the rectified multi-sensor images of the same scene; then different fusion strategies are adopted to fuse the low-frequency and high-frequency directional sub-band coefficients respectively: a regional-energy adaptive weighting method is used for the low-frequency sub-band coefficients, while the directional sub-band coefficients adopt a regional-energy-matching scheme that combines the weighted-average method and the selection method. Finally, the fused image is obtained by the inverse NSCT. Experiments were conducted on IR and visible-light images and on multi-focus images, and the fused images were evaluated objectively. The experimental results show that the fused image obtained by this algorithm has better subjective visual quality and objective quantitative indicators, and that the algorithm is superior to traditional fusion methods.
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Shanshan, Yikun Yang, Xin Jin, Ya Zhang, Qian Jiang, and Shaowen Yao. "Multi-Sensor Image Fusion Using Optimized Support Vector Machine and Multiscale Weighted Principal Component Analysis." Electronics 9, no. 9 (2020): 1531. http://dx.doi.org/10.3390/electronics9091531.

Full text
Abstract:
Multi-sensor image fusion is used to combine the complementary information of source images from multiple sensors. Conventional image fusion schemes based on signal processing techniques have been studied extensively, and machine learning-based techniques have recently been introduced into image fusion because of their prominent advantages. In this work, a new multi-sensor image fusion method based on the support vector machine and principal component analysis is proposed. First, the key features of the source images are extracted by combining the sliding-window technique with five effective evaluation indicators. Second, a trained support vector machine model is used to separate the focus region and the non-focus region of the source images according to the extracted image features, so that a fusion decision is obtained for each source image. Then, a consistency verification operation is used to absorb isolated singular points in the decisions of the trained classifier. Finally, a novel method based on principal component analysis and a multi-scale sliding window is proposed to handle the disputed areas in the fusion decision pair. Experiments are performed to verify the performance of the new combined method.
APA, Harvard, Vancouver, ISO, and other styles
8

Xiaobing, Zhang, Zhou Wei, and Song Mengfei. "Oil exploration oriented multi-sensor image fusion algorithm." Open Physics 15, no. 1 (2017): 188–96. http://dx.doi.org/10.1515/phys-2017-0020.

Full text
Abstract:
In order to accurately forecast fractures and the dominant fracture direction in oil exploration, we propose a novel multi-sensor image fusion algorithm. The main innovations of this paper are that we introduce the dual-tree complex wavelet transform (DTCWT) into data fusion and divide an image into several regions before fusion. The DTCWT is a newer type of wavelet transform designed to solve the problem of signal decomposition and reconstruction using two parallel real wavelet transforms. We utilize the DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of each region to design the priority function. To test the effectiveness of the proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm can achieve high accuracy in multi-sensor image fusion, especially for oil-exploration images.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, C. T., X. Ouyang, W. H. Wong, et al. "Sensor fusion in image reconstruction." IEEE Transactions on Nuclear Science 38, no. 2 (1991): 687–92. http://dx.doi.org/10.1109/23.289375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Furtado, Luiz Felipe de Almeida, Thiago Sanna Freire Silva, Pedro José Farias Fernandes, and Evelyn Márcia Leão de Moraes Novo. "Land cover classification of Lago Grande de Curuai floodplain (Amazon, Brazil) using multi-sensor and image fusion techniques." Acta Amazonica 45, no. 2 (2015): 195–202. http://dx.doi.org/10.1590/1809-4392201401439.

Full text
Abstract:
Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
APA, Harvard, Vancouver, ISO, and other styles
11

Pohl, C., and Y. Zeng. "Development of a fusion approach selection tool." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W4 (June 26, 2015): 139–44. http://dx.doi.org/10.5194/isprsarchives-xl-7-w4-139-2015.

Full text
Abstract:
During the last decades the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would gain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. At present the user has access to sophisticated commercial image fusion techniques plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour, and it requires knowledge of remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques and provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to apply advanced processing methods and maximize the benefit of multi-sensor image exploitation.
APA, Harvard, Vancouver, ISO, and other styles
12

Merianos, Ioannis, and Nikolaos Mitianoudis. "Multiple-Exposure Image Fusion for HDR Image Synthesis Using Learned Analysis Transformations." Journal of Imaging 5, no. 3 (2019): 32. http://dx.doi.org/10.3390/jimaging5030032.

Full text
Abstract:
Modern imaging applications have increased the demand for High Dynamic Range (HDR) imaging. Nonetheless, HDR imaging is not easily achievable with low-cost imaging sensors, since their dynamic range is rather limited. A viable route to HDR imaging via low-cost imaging sensors is the synthesis of multiple-exposure images: a low-cost sensor can capture the observed scene at multiple exposure settings and an image-fusion algorithm can combine all these images to form an image of increased dynamic range. In this work, two image-fusion methods are combined to tackle multiple-exposure fusion. The luminance channel is fused using the Mitianoudis and Stathaki (2008) method, while the color channels are combined using the method proposed by Mertens et al. (2007). The proposed fusion algorithm performs well without the halo artifacts that exist in other state-of-the-art methods. This paper is an extended version of a conference paper, with more analysis of the derived method and more experimental results that confirm its validity.
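The Mertens et al. (2007) exposure-fusion step referred to here has a readily available implementation in OpenCV; the snippet below is a minimal usage sketch with hypothetical file names, not the authors' combined luminance/colour pipeline.

```python
# Minimal exposure fusion with OpenCV's implementation of Mertens et al. (2007).
# The input file names are hypothetical placeholders.
import cv2
import numpy as np

exposures = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(exposures)   # float result, roughly in [0, 1]
cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```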
APA, Harvard, Vancouver, ISO, and other styles
13

Acho, Leonardo, and Pablo Buenestado. "Image Fusion Based on Principal Component Analysis and Slicing Image Transformation." MATEC Web of Conferences 210 (2018): 04020. http://dx.doi.org/10.1051/matecconf/201821004020.

Full text
Abstract:
Image fusion deals with the ability to integrate data from image sensors captured at different instants when the source information is uncertain. Although many techniques exist on the subject, in this paper we develop two original techniques based on principal component analysis and slicing image transformation to efficiently fuse a small set of noisy images. By contrast, neural data fusion requires a considerable number of corrupted images to produce the desired outcome efficiently, and also considerable computing time because of the dynamics involved in the data fusion process. In our approaches, the computation time is considerably smaller, which makes them appealing for increasing feasibility in, for instance, remote sensing or wireless sensor networks. Moreover, according to our numerical experiments, our methods perform better than the neural data fusion algorithm.
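As background, the classical PCA weighting rule widely used for pixel-level fusion (a generic sketch, not necessarily the exact variant developed in this paper) derives per-image weights from the leading eigenvector of the 2×2 covariance matrix of the co-registered sources:

```python
# Classical PCA-based pixel-level fusion: weights come from the leading
# eigenvector of the 2x2 covariance matrix of the flattened source images.
import numpy as np

def pca_fuse(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                       # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    w = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector
    w /= w.sum()                             # normalise to fusion weights
    return w[0] * img1 + w[1] * img2
```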
APA, Harvard, Vancouver, ISO, and other styles
14

Pardhasaradhi, P., B. T. P. Madhav, G. Lakshmi Sindhuja, K. Sai Sreeram, M. Parvathi, and B. Lokesh. "Image enhancement with contrast coefficients using wavelet based image fusion." International Journal of Engineering & Technology 7, no. 2.8 (2018): 432. http://dx.doi.org/10.14419/ijet.v7i2.8.10476.

Full text
Abstract:
Current work is mainly focused on image brightness and the storage capacity required for an image. Sharp images provide better information than blurred images, and image enhancement techniques are used to overcome blurriness, while image fusion is used to overcome information loss in the image. This paper presents image enhancement and fusion by applying the wavelet transform. The wavelet transform is used mainly because of its inherent properties of redundancy and shift invariance, and it transforms the image into different scales; the degree of enhancement is decided based on the levels of transformation. Low contrast results from poor resolution, lack of dynamic range, wrong sensor-lens settings during acquisition and poor quality of cameras and sensors. To avoid information loss, an interesting solution is to take pictures of the same scene focused on different regions; using the image fusion concept, all captured images are then combined to get a single image that contains the properties of both source images. The image entropy is computed to determine the quality of the image. The paper demonstrates the image fusion method both for multi-resolution images and for images captured at different temperatures.
APA, Harvard, Vancouver, ISO, and other styles
15

Jabari, S., F. Fathollahi, and Y. Zhang. "APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 153–56. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-153-2017.

Full text
Abstract:
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
APA, Harvard, Vancouver, ISO, and other styles
16

Huang, Feng, and Zhong Ming Pan. "A Security Method for Multi-Sensor Fused Image." Advanced Materials Research 216 (March 2011): 297–300. http://dx.doi.org/10.4028/www.scientific.net/amr.216.297.

Full text
Abstract:
Sensors are used more and more widely today, and valuable images can be obtained through multi-sensor fusion technology. This paper designs an image security method for multi-sensor fused images. It includes key generation, permutation, diffusion and decryption. In the key generation part, three keys are derived from six decimal numbers. The permutation process uses a new chaotic map to shuffle the positions of image pixels, and the diffusion process uses a classic chaotic map to flatten the histogram of the ciphered image. Decryption is the reverse of the encryption process. The results prove the validity of the method and show that it can be used for real-time information protection of fused images.
APA, Harvard, Vancouver, ISO, and other styles
17

Cvejic, N., D. R. Bull, and C. N. Canagarajah. "Metric for multimodal image sensor fusion." Electronics Letters 43, no. 2 (2007): 95. http://dx.doi.org/10.1049/el:20073460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chakravortty, S., and P. Subramaniam. "Fusion of Hyperspectral and Multispectral Image Data for Enhancement of Spectral and Spatial Resolution." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 1099–103. http://dx.doi.org/10.5194/isprsarchives-xl-8-1099-2014.

Full text
Abstract:
Hyperspectral image enhancement has been a concern for the remote sensing society for detailed end member detection. Hyperspectral remote sensor collects images in hundreds of narrow, continuous spectral channels, whereas multispectral remote sensor collects images in relatively broader wavelength bands. However, the spatial resolution of the hyperspectral sensor image is comparatively lower than that of the multispectral. As a result, spectral signatures from different end members originate within a pixel, known as mixed pixels. This paper presents an approach for obtaining an image which has the spatial resolution of the multispectral image and spectral resolution of the hyperspectral image, by fusion of hyperspectral and multispectral image. The proposed methodology also addresses the band remapping problem, which arises due to different regions of spectral coverage by multispectral and hyperspectral images. Therefore we apply algorithms to restore the spatial information of the hyperspectral image by fusing hyperspectral bands with only those bands which come under each multispectral band range. The proposed methodology is applied over Henry Island, of the Sunderban eco-geographic province. The data is collected by the Hyperion hyperspectral sensor and LISS IV multispectral sensor.
APA, Harvard, Vancouver, ISO, and other styles
19

Paramanandham, Nirmala, and Kishore Rajendiran. "Multi sensor image fusion for surveillance applications using hybrid image fusion algorithm." Multimedia Tools and Applications 77, no. 10 (2017): 12405–36. http://dx.doi.org/10.1007/s11042-017-4895-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sharma, Arpita, and Samiksha Goel. "Cuckoo Search Based Decision Fusion Techniques for Natural Terrain Understanding." International Journal of Applied Evolutionary Computation 5, no. 2 (2014): 1–21. http://dx.doi.org/10.4018/ijaec.2014040101.

Full text
Abstract:
This paper proposes two novel nature inspired decision level fusion techniques, Cuckoo Search Decision Fusion (CSDF) and Improved Cuckoo Search Decision Fusion (ICSDF) for enhanced and refined extraction of terrain features from remote sensing data. The developed techniques derive their basis from a recently introduced bio-inspired meta-heuristic Cuckoo Search and modify it suitably to be used as a fusion technique. The algorithms are validated on remote sensing satellite images acquired by multispectral sensors namely LISS3 Sensor image of Alwar region in Rajasthan, India and LANDSAT Sensor image of Delhi region, India. Overall accuracies obtained are substantially better than those of the four individual terrain classifiers used for fusion. Results are also compared with majority voting and average weighing policy fusion strategies. A notable achievement of the proposed fusion techniques is that the two difficult to identify terrains namely barren and urban are identified with similar high accuracies as other well identified land cover types, which was not possible by single analyzers.
APA, Harvard, Vancouver, ISO, and other styles
21

Fuse, T., and K. Matsumoto. "Development of a self-localization method using sensors on mobile devices." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 237–42. http://dx.doi.org/10.5194/isprsarchives-xl-5-237-2014.

Full text
Abstract:
Recently, high-performance CPUs, cameras and other sensors on mobile devices have been used for a wide variety of applications. Most of these applications require self-localization of the mobile device. Since this self-localization is based on GPS, a gyro sensor, an accelerometer and a magnetic field sensor (collectively called POS) of low accuracy, the applications are limited. On the other hand, self-localization methods using images have been developed, and their accuracy is increasing. This paper develops a self-localization method that uses sensors such as POS and cameras on mobile devices simultaneously. The proposed method consists mainly of two parts: improvement of the accuracy of the POS data itself by POS sensor fusion based on filtering theory, and development of a self-localization method that integrates POS and camera. The proposed method combines all POS data using a Kalman filter in order to improve the accuracy of the exterior orientation parameters. The exterior orientation parameters from POS sensor fusion are used as initial values in the image-based self-localization method, which consists of feature point extraction/tracking, estimation of the feature point coordinates, and updates of the orientation parameters of the mobile device. The proposed method is applied to POS data and images taken in an urban area. Experiments with real data confirm the accuracy improvement by POS sensor fusion, and the proposed self-localization method with POS and camera achieves higher accuracy than POS sensor fusion alone.
APA, Harvard, Vancouver, ISO, and other styles
22

Pereira, Luciana Escalante, Giancarlo Lastoria, Bruna Semler de Almeida, et al. "APPLICATION OF AERIAL AND ORBITAL SENSOR PHOTOGRAPHS TO IDENTIFY AND DELINEATE WATER BODIES." Boletim de Ciências Geodésicas 23, no. 4 (2017): 591–605. http://dx.doi.org/10.1590/s1982-21702017000400039.

Full text
Abstract:
The application of orbital sensors to identify and delineate water bodies was evaluated in this study. Reference aerial photos were used to measure the surface area of three water bodies in São Gabriel do Oeste, MS, Brazil and assess seven sensors commonly used in environmental studies: ALOS-AVNIR, CBERS 2B-CCD, CBERS 2B-HRC, IRS P6-LISS3, LANDSAT-TM, LANDSAT-ETM+, and LANDSAT-OLI. The images were analyzed with the near infrared (NIR) band, and digital processing techniques including image fusion (spatial enhancement), false-color composition, and pre-processed radiometric correction were applied to some sensors. Image fusion and radiometric correction were applied to three sensors; only color composition was not conducted on the HRC sensor. In all water bodies analyzed, images from the CCD sensor showed the greatest values of imprecision, reaching 192% for Water Body #3 without digital processing. Considering the spectral properties of the NIR band, we expected more precise data from the analyses using this spectral range. However, color composite analyses obtained greater percent precision compared with analyses that only used the NIR band.
APA, Harvard, Vancouver, ISO, and other styles
23

Belov, A. M., and A. Y. Denisova. "Earth remote sensing imagery classification using a multi-sensor super-resolution fusion algorithm." Computer Optics 44, no. 4 (2020): 627–35. http://dx.doi.org/10.18287/2412-6179-co-735.

Full text
Abstract:
Earth remote sensing data fusion is intended to produce images of higher quality than the original ones. However, the fusion impact on further thematic processing remains an open question because fusion methods are mostly used to improve the visual data representation. This article addresses an issue of the effect of fusion with increasing spatial and spectral resolution of data on thematic classification of images using various state-of-the-art classifiers and features extraction methods. In this paper, we use our own algorithm to perform multi-frame image fusion over optical remote sensing images with different spatial and spectral resolutions. For classification, we applied support vector machines and Random Forest algorithms. For features, we used spectral channels, extended attribute profiles and local feature attribute profiles. An experimental study was carried out using model images of four imaging systems. The resulting image had a spatial resolution of 2, 3, 4 and 5 times better than for the original images of each imaging system, respectively. As a result of our studies, it was revealed that for the support vector machines method, fusion was inexpedient since excessive spatial details had a negative effect on the classification. For the Random Forest algorithm, the classification results of a fused image were more accurate than for the original low-resolution images in 90% of cases. For example, for images with the smallest difference in spatial resolution (2 times) from the fusion result, the classification accuracy of the fused image was on average 4% higher. In addition, the results obtained for the Random Forest algorithm with fusion were better than the results for the support vector machines method without fusion. Additionally, it was shown that the classification accuracy of a fused image using the Random Forest method could be increased by an average of 9% due to the use of extended attribute profiles as features. Thus, when using data fusion, it is better to use the Random Forest classifier, whereas using fusion with the support vector machines method is not recommended.
APA, Harvard, Vancouver, ISO, and other styles
24

Lee, Changno, and Jaehong Oh. "Rigorous Co-Registration of KOMPSAT-3 Multispectral and Panchromatic Images for Pan-Sharpening Image Fusion." Sensors 20, no. 7 (2020): 2100. http://dx.doi.org/10.3390/s20072100.

Full text
Abstract:
KOMPSAT-3, a Korean earth observing satellite, provides the panchromatic (PAN) band and four multispectral (MS) bands. They can be fused to obtain a pan-sharpened image of higher resolution in both the spectral and spatial domain, which is more informative and interpretative for visual inspection. In KOMPSAT-3 Advanced Earth Imaging Sensor System (AEISS) uni-focal camera system, the precise sensor alignment is a prerequisite for the fusion of MS and PAN images because MS and PAN Charge-Coupled Device (CCD) sensors are installed with certain offsets. In addition, exterior effects associated with the ephemeris and terrain elevation lead to the geometric discrepancy between MS and PAN images. Therefore, we propose a rigorous co-registration of KOMPSAT-3 MS and PAN images based on physical sensor modeling. We evaluated the impacts of CCD line offsets, ephemeris, and terrain elevation on the difference in image coordinates. The analysis enables precise co-registration modeling between MS and PAN images. An experiment with KOMPSAT-3 images produced negligible geometric discrepancy between MS and PAN images.
APA, Harvard, Vancouver, ISO, and other styles
25

Ramya, H. R., and B. K. Sujatha. "Fine grained medical image fusion using type-2 fuzzy logic." Indonesian Journal of Electrical Engineering and Computer Science 14, no. 2 (2019): 999. http://dx.doi.org/10.11591/ijeecs.v14.i2.pp999-1011.

Full text
Abstract:
In recent years, many fast-growing technologies have coupled with the wide volume of medical data to drive the digitalization of that data. Thus, researchers have shown immense interest in multi-sensor image fusion technologies, which combine image information from various sensor modalities into a single image. Image fusion is a widespread technique in medical instrumentation and measurement for diagnosis. In this paper, a novel multimodal sensor medical image fusion method based on type-2 fuzzy logic is proposed using the Sugeno model. Moreover, a Gaussian smoothing filter is introduced to extract the detailed information of an image using sharp feature points. A type-2 fuzzy algorithm is used to obtain highly informative feature points from both images and provide a visually well-classified resultant image. The experimental results demonstrate that the proposed method achieves better performance than state-of-the-art methods in terms of visual quality and objective evaluation.
APA, Harvard, Vancouver, ISO, and other styles
26

Sreedhar, P. Siva Satya, and N. Nandhagopal. "Image Fusion-The Pioneering Technique for Real-Time Image Processing Applications." Journal of Computational and Theoretical Nanoscience 18, no. 4 (2021): 1208–12. http://dx.doi.org/10.1166/jctn.2021.9403.

Full text
Abstract:
An image is a two-dimensional function expressed through spatial coordinates X, Y; at any pair of coordinates (x, y), the amplitude at that point is called the intensity of the pixel. A digital image comprises a finite number of components, each of which has a precise value at a given location; those components are called pixels. Image Fusion is the process of transforming data from two or more images of a scene into a single image that is more descriptive than either of the inputs and more appropriate for information processing. Image Fusion (IF) has been utilized in numerous application areas. Remote Sensing Satellites (RSS) produce different images based on their sensory characteristics; among these, Panchromatic (PAN) and Multi-Spectral (MS) images are widely used in Satellite Image Fusion (SIF). Image Fusion (IF) techniques are broadly classified into spatial-domain and frequency-domain methods. Wavelet Fusion Techniques (WFT) based on the Frequency Domain (FD) have applications in the medical, space and military fields. This paper delivers a study of some of the Image Fusion (IF) techniques. Remote Sensing Image (RSI) and Data Fusion (DF) seek to merge the data acquired from sensors installed on satellites, airborne platforms, and ground-based sensors, with specific spatial, spectral and temporal resolutions, to produce merged data containing more accurate information than is found in each of the individual data sources.
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Ming Jing, Yu Bing Dong, and Xiao Li Wang. "Research and Development of Non Multi-Scale to Pixel-Level Image Fusion." Applied Mechanics and Materials 448-453 (October 2013): 3621–24. http://dx.doi.org/10.4028/www.scientific.net/amm.448-453.3621.

Full text
Abstract:
Non-multi-scale image fusion methods take the original images as the object of study and apply various fusion rules to fuse them directly, without decomposing or transforming the original images. They can therefore also be called simple multi-sensor image fusion methods. Their advantages are low computational complexity and a simple principle, and they are currently the most widely used image fusion methods. The basic principle is to directly select the larger gray value, the smaller gray value, or a weighted average among corresponding pixels of the source images, and fuse these into a new image. Simple pixel-level image fusion thus mainly includes averaging or weighted averaging of pixel gray values, selecting the larger gray value, and selecting the smaller gray value. This paper introduces the basic principle of the fusion process in detail and summarizes current pixel-level fusion algorithms. Simulation results are presented to illustrate the fusion schemes. In practice, the fusion algorithm is selected according to the imaging characteristics to be retained.
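The pixel-level rules summarised in this abstract reduce to elementwise operations on co-registered grayscale images; a minimal illustrative sketch of the three basic rules might look like this:

```python
# The three basic pixel-level fusion rules, applied elementwise to
# co-registered grayscale source images of identical shape.
import numpy as np

def fuse_weighted_average(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
    return w * a + (1.0 - w) * b      # w = 0.5 gives the plain average

def fuse_select_max(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.maximum(a, b)           # keep the larger gray value per pixel

def fuse_select_min(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.minimum(a, b)           # keep the smaller gray value per pixel
```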
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Yang, and Zheng Qin. "PCNN-Based Image Fusion in Compressed Domain." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/536215.

Full text
Abstract:
This paper addresses a novel method of image fusion problem for different application scenarios, employing compressive sensing (CS) as the image sparse representation method and pulse-coupled neural network (PCNN) as the fusion rule. Firstly, source images are compressed through scrambled block Hadamard ensemble (SBHE) for its compression capability and computational simplicity on the sensor side. Local standard variance is input to motivate PCNN and coefficients with large firing times are selected as the fusion coefficients in compressed domain. Fusion coefficients are smoothed by sliding window in order to avoid blocking effect. Experimental results demonstrate that the proposed fusion method outperforms other fusion methods in compressed domain and is effective and adaptive in different image fusion applications.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhou, Ze Hua, and Min Tan. "Infrared Image and Visible Image Fusion Based on Wavelet Transform." Advanced Materials Research 756-759 (September 2013): 2850–56. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2850.

Full text
Abstract:
For the same scene, fusing the infrared image and the visible image can exploit the information of both original images simultaneously, overcoming the limitations and differences of a single-sensor image in terms of geometric, spectral and spatial resolution and improving image quality, which helps to locate, identify and explain physical phenomena and events. This paper puts forward an image fusion method based on the wavelet transform. For the frequency sub-bands of the wavelet decomposition, the principles for selecting the high-frequency and low-frequency coefficients are discussed separately, highlighting contours while attenuating fine detail. The fused image combines the characteristics of two or more images, better matches human or machine visual characteristics, and is suited to further analysis and understanding, detection and identification, or tracking of targets.
APA, Harvard, Vancouver, ISO, and other styles
30

Dietz, Henry, and Paul Eberhart. "Senscape: modeling and presentation of uncertainty in fused sensor data live image streams." Electronic Imaging 2020, no. 14 (2020): 392–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.14.coimg-392.

Full text
Abstract:
Fusion of data from multiple sensors is a difficult problem. Most recent work centers on techniques that allow image data from multiple similar sources to be aligned and used to improve apparent image quality or field of view. In contrast, the current work centers on modeling and representation of uncertainty in real-time fusion of data from fundamentally dissimilar sensors. Where multiple sensors of differing type, resolution, field of view, and sample rate are providing scene data, the proposed scheme directly models uncertainty and provides an intuitive mechanism for visually representing the time-varying level of confidence in the correctness of fused sensor data producing a live image stream.
APA, Harvard, Vancouver, ISO, and other styles
31

Yuan, Yubin, Yu Shen, Jing Peng, Lin Wang, and Hongguo Zhang. "Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light." Journal of Sensors 2020 (November 15, 2020): 1–17. http://dx.doi.org/10.1155/2020/8818650.

Full text
Abstract:
Since the method to remove fog from images is complicated and detail loss and color distortion could occur to the defogged images, a defogging method based on near-infrared and visible image fusion is put forward in this paper. The algorithm in this paper uses the near-infrared image with rich details as a new data source and adopts the image fusion method to obtain a defog image with rich details and high color recovery. First, the colorful visible image is converted into HSI color space to obtain an intensity channel image, color channel image, and saturation channel image. The intensity channel image is fused with a near-infrared image and defogged, and then it is decomposed by Nonsubsampled Shearlet Transform. The obtained high-frequency coefficient is filtered by preserving the edge with a double exponential edge smoothing filter, while low-frequency antisharpening masking treatment is conducted on the low-frequency coefficient. The new intensity channel image could be obtained based on the fusion rule and by reciprocal transformation. Then, in color treatment of the visible image, the degradation model of the saturation image is established, which estimates the parameters based on the principle of dark primary color to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the primary color image are reflected to RGB space to obtain the fusion image, which is enhanced by color and sharpness correction. In order to prove the effectiveness of the algorithm, the dense fog image and the thin fog image are compared with the popular single image defogging and multiple image defogging algorithms and the visible light-near infrared fusion defogging algorithm based on deep learning. The experimental results show that the proposed algorithm is better in improving the edge contrast and the visual sharpness of the image than the existing high-efficiency defogging method.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Dong, Lu, Lou, and Zhou. "Multi-Sensor Face Registration Based on Global and Local Structures." Applied Sciences 9, no. 21 (2019): 4623. http://dx.doi.org/10.3390/app9214623.

Full text
Abstract:
The work reported in this paper aims at utilizing the global geometrical relationship and local shape features to register multi-spectral images for fusion-based face recognition. We first propose a multi-spectral face image registration method based on both the global and local structures of feature point sets, combining the global geometrical relationship and local shape features in a new Student's t mixture probabilistic model framework. On the one hand, we use the inner-distance shape context as the local shape descriptor of the feature point sets. On the other hand, we formulate the registration of feature point sets from the multi-spectral face images as estimation of the Student's t mixture probabilistic model, and local shape descriptors are used to replace the mixing proportions of the prior Student's t mixture model. Furthermore, in order to improve the anti-interference performance of face recognition techniques, a guided-filtering and gradient-preserving image fusion strategy is used to fuse the registered multi-spectral face images, so that the multi-spectral fusion image retains more of the apparent details of the visible image and the thermal radiation information of the infrared image. Subjective and objective registration experiments are conducted with manually selected landmarks and real multi-spectral face images. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate the accuracy and robustness of the proposed method in solving the multi-spectral face image registration problem.
APA, Harvard, Vancouver, ISO, and other styles
33

Bagheri, H., M. Schmitt, P. d’Angelo, and X. X. Zhu. "EXPLORING THE APPLICABILITY OF SEMI-GLOBAL MATCHING FOR SAR-OPTICAL STEREOGRAMMETRY OF URBAN SCENES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 43–48. http://dx.doi.org/10.5194/isprs-archives-xlii-2-43-2018.

Full text
Abstract:
Nowadays, a huge archive of data from different satellite sensors is available for diverse objectives. While every new sensor provides data with ever higher resolution and more sophisticated special properties, using the data acquired by only one sensor might sometimes still not be enough. As a result, data fusion techniques can be applied with the aim of jointly exploiting data from multiple sensors. One example is to produce 3D information from optical and SAR imagery by employing stereogrammetric methods. This paper investigates the application of the semi-global matching (SGM) framework for 3D reconstruction from SAR-optical image pairs. For this objective, a multi-sensor block adjustment is first carried out to align the optical image with a corresponding SAR image using an RPC-based formulation of the imaging models. Then, dense image matching with SGM is implemented to investigate its potential for multi-sensor 3D reconstruction. While the results achieved with WorldView-2 and TerraSAR-X images demonstrate the general feasibility of SAR-optical stereogrammetry, they also show the limited applicability of SGM for this task in its out-of-the-box formulation.
APA, Harvard, Vancouver, ISO, and other styles
34

De Silva, Varuna, Jamie Roche, and Ahmet Kondoz. "Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots." Sensors 18, no. 8 (2018): 2730. http://dx.doi.org/10.3390/s18082730.

Full text
Abstract:
Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors such as LiDAR, radar, ultrasound sensors and cameras is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams are different from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Hang, Hengyu Li, Jun Luo, Shaorong Xie, and Yu Sun. "Construction of All-in-Focus Images Assisted by Depth Sensing." Sensors 19, no. 6 (2019): 1409. http://dx.doi.org/10.3390/s19061409.

Full text
Abstract:
Multi-focus image fusion is a technique for obtaining an all-in-focus image in which all objects are in focus to extend the limited depth of field (DoF) of an imaging system. Different from traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm is used to segment the depth map from the depth sensor, and the segmented regions are used to guide a focus algorithm to locate in-focus image blocks from among multi-focus source images to construct the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method and representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that this method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
APA, Harvard, Vancouver, ISO, and other styles
36

Christinal, J. Jenitha, and Jemima Jebaseeli. "A Novel Color Image Fusion for Multi Sensor Night Vision Images." International Journal of Computer Applications Technology and Research 2, no. 2 (2013): 155–59. http://dx.doi.org/10.7753/ijcatr0202.1014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Aria, Amin, Enrique Lopez Droguett, Shapour Azarm, and Mohammad Modarres. "Estimating damage size and remaining useful life in degraded structures using deep learning-based multi-source data fusion." Structural Health Monitoring 19, no. 5 (2019): 1542–59. http://dx.doi.org/10.1177/1475921719890616.

Full text
Abstract:
In this article, a new deep learning-based approach for online estimation of damage size and remaining useful life of structures is presented. The proposed approach consists of three modules. In the first module, a long short-term memory regression model is used to construct a sensor-based estimation of the damage size where different ranges of temporal correlations are considered for their effects on the accuracy of the damage size estimations. In the second module, a convolutional neural network semantic image segmentation approach is used to construct automated damage size estimations in which a pixel-wise classification is carried out on images of the damaged areas. Using physics-of-failure relations, frequency mismatches associated with sensor- and image-based size estimations are resolved. Finally, in the third module, damage size estimations obtained by the first two modules are fused together for an online remaining useful life estimation of the structure. Performance of the proposed approach is evaluated using sensor and image data obtained from a set of fatigue crack experiments performed on aluminum alloy 7075-T6 specimens. It is shown that using acoustic emission signals obtained from sensors and microscopic images in these experiments, the damage size estimations obtained from the proposed data fusion approach have higher accuracy than the sensor-based and higher frequency than the image-based estimations. Moreover, the accuracy of the data fusion estimations is found to be more than that of image-based estimations for the experiment with the largest sensor dataset. Based on the results obtained, it is concluded that the consideration of longer temporal correlations can lead to improvements in the accuracy of crack size estimations and, thus, a better remaining useful life estimation for structures.
APA, Harvard, Vancouver, ISO, and other styles
38

He, Gui-Qing, Shi-Hao Chen, Yun Tian, and Chong-Yang Hao. "Synthesis Performance Evaluation of Multi-Sensor Image Fusion." Chinese Journal of Computers 31, no. 3 (2009): 486–92. http://dx.doi.org/10.3724/sp.j.1016.2008.00486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bavirisetti, Durga Prasad, and Ravindra Dhuli. "Multi Sensor Image Fusion Using Saliency Map Detection." International Review on Computers and Software (IRECOS) 10, no. 7 (2015): 757. http://dx.doi.org/10.15866/irecos.v10i7.6793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Haijiang, Qinke Yang, and Rui Li. "Tunable-Q contourlet-based multi-sensor image fusion." Signal Processing 93, no. 7 (2013): 1879–91. http://dx.doi.org/10.1016/j.sigpro.2012.11.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Meng, Fanjie, Ruixia Shi, Dalong Shan, Yang Song, Wangpeng He, and Weidong Cai. "Multi-sensor image fusion based on regional characteristics." International Journal of Distributed Sensor Networks 13, no. 11 (2017): 155014771774110. http://dx.doi.org/10.1177/1550147717741105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Tong, Ying, and Jin Chen. "Multi-Focus Image Fusion Algorithm in Sensor Networks." IEEE Access 6 (2018): 46794–800. http://dx.doi.org/10.1109/access.2018.2866020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Naidu, V. P. S. "Hybrid DDCT-PCA based multi sensor image fusion." Journal of Optics 43, no. 1 (2013): 48–61. http://dx.doi.org/10.1007/s12596-013-0148-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Rasti, Behnood, and Pedram Ghamisi. "Remote sensing image classification using subspace sensor fusion." Information Fusion 64 (December 2020): 121–30. http://dx.doi.org/10.1016/j.inffus.2020.07.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhong, Lei, Kazunori Ohno, Eijiro Takeuchi, and Satoshi Tadokoro. "1P1-E04 Transparent Object Detection Using Color Image and Laser Reflectance Image for Mobile Manipulator(3D Measurement/Sensor Fusion)." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2011 (2011): _1P1—E04_1—_1P1—E04_4. http://dx.doi.org/10.1299/jsmermd.2011._1p1-e04_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Zhao-Xiang, Guo-Dong Xu, and Jia-Ning Song. "Observation satellite attitude estimation using sensor measurement and image registration fusion." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 232, no. 7 (2017): 1390–402. http://dx.doi.org/10.1177/0954410017691315.

Full text
Abstract:
In order to enhance the accuracy and the robustness of the attitude determination and control system in observation satellites, a new way to fuse gyro and star tracker measurement with image registration is described. In this method, a novel and complete framework is proposed to estimate the on-orbit attitude variations from multi-spectrum remote sensing images. An extended Kalman filter is derived to calibrate the gyro bias drift and the star tracker error. The new framework is tested with realistically simulated data and remote sensing images based on JL-1 satellite. Simulation and experiment results indicate that based on the image registration, the satellite attitude variations could be detected in real time and applied for the accurate gyro and star tracker bias calibration.
APA, Harvard, Vancouver, ISO, and other styles
47

Xie, Liang, Chao Xiang, Zhengxu Yu, et al. "PI-RCNN: An Efficient Multi-Sensor 3D Object Detector with Point-Based Attentive Cont-Conv Fusion Module." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 12460–67. http://dx.doi.org/10.1609/aaai.v34i07.6933.

Full text
Abstract:
LIDAR point clouds and RGB images are both essential for 3D object detection, so many state-of-the-art 3D detection algorithms are dedicated to fusing these two types of data effectively. However, fusion methods based on the Bird's Eye View (BEV) or voxel format are not accurate. In this paper, we propose a novel fusion approach named the Point-based Attentive Cont-conv Fusion (PACF) module, which fuses multi-sensor features directly on 3D points. In addition to continuous convolution, we add Point-Pooling and Attentive Aggregation operations to make the fused features more expressive. Moreover, based on the PACF module, we propose a 3D multi-sensor multi-task network called Pointcloud-Image RCNN (PI-RCNN for short), which handles the image segmentation and 3D object detection tasks. PI-RCNN employs a segmentation sub-network to extract full-resolution semantic feature maps from images and then fuses the multi-sensor features via the PACF module. Benefiting from the effectiveness of the PACF module and the expressive semantic features from the segmentation module, PI-RCNN improves considerably in 3D object detection. We demonstrate the effectiveness of the PACF module and PI-RCNN on the KITTI 3D Detection benchmark, and our method achieves state-of-the-art results on the 3D AP metric.
APA, Harvard, Vancouver, ISO, and other styles
48

Lebedev, M. A., D. G. Stepaniants, D. V. Komarov, O. V. Vygolov, Yu V. Vizilter, and S. Yu Zheltov. "A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 171–75. http://dx.doi.org/10.5194/isprsarchives-xl-3-171-2014.

Full text
Abstract:
The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm is proposed for fusing a sensor image acquired by an onboard camera with a synthetic 3D image of the external view generated in an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, whose idea is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
APA, Harvard, Vancouver, ISO, and other styles
49

Hiba, Antal, Attila Gáti, and Augustin Manecy. "Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach." Sensors 21, no. 6 (2021): 2203. http://dx.doi.org/10.3390/s21062203.

Full text
Abstract:
Precise navigation is often performed by sensor fusion of different sensors. Among these sensors, optical sensors use image features to obtain the position and attitude of the camera. Runway relative navigation during final approach is a special case where robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
APA, Harvard, Vancouver, ISO, and other styles
50

Kaimaris, Dimitris, and Aristoteles Kandylas. "Small Multispectral UAV Sensor and Its Image Fusion Capability in Cultural Heritage Applications." Heritage 3, no. 4 (2020): 1046–62. http://dx.doi.org/10.3390/heritage3040057.

Full text
Abstract:
For many decades the multispectral images of the earth’s surface and its objects were taken from multispectral sensors placed on satellites. In recent years, the technological evolution produced similar sensors (much smaller in size and weight) which can be placed on Unmanned Aerial Vehicles (UAVs), thereby allowing the collection of higher spatial resolution multispectral images. In this paper, Parrot’s small Multispectral (MS) camera Sequoia+ is used, and its images are evaluated at two archaeological sites, on the Byzantine wall (ground application) of Thessaloniki city (Greece) and on a mosaic floor (aerial application) at the archaeological site of Dion (Greece). The camera receives RGB and MS images simultaneously, a fact which does not allow image fusion to be performed, as in the standard utilization procedure of Panchromatic (PAN) and MS image of satellite passive systems. In this direction, that is, utilizing the image fusion processes of satellite PAN and MS images, this paper demonstrates that with proper digital processing the images (RGB and MS) of small MS cameras can lead to a fused image with a high spatial resolution, which retains a large percentage of the spectral information of the original MS image. The high percentage of spectral fidelity of the fused images makes it possible to perform high-precision digital measurements in archaeological sites such as the accurate digital separation of the objects, area measurements and retrieval of information not so visible with common RGB sensors via the MS and RGB data of small MS sensors.
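For orientation, a standard component-substitution step of the kind adapted in such workflows is Brovey-style sharpening, shown below as a generic sketch (not the authors' exact processing chain); the multispectral bands are assumed to be already resampled to the grid of the high-resolution band.

```python
# Generic Brovey-style component substitution: scale each upsampled MS band
# by the ratio of the high-resolution band to the mean of the MS bands.
import numpy as np

def brovey_fuse(hires: np.ndarray, ms_upsampled: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """hires: (H, W); ms_upsampled: (H, W, B) resampled to the hires grid."""
    intensity = ms_upsampled.mean(axis=2)
    ratio = hires / (intensity + eps)
    return ms_upsampled * ratio[..., np.newaxis]
```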
APA, Harvard, Vancouver, ISO, and other styles