Academic literature on the topic 'Image and Sensor Fusion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image and Sensor Fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Image and Sensor Fusion"

1

Panguluri, Sumanth Kumar, and Laavanya Mohan. "A DWT Based Novel Multimodal Image Fusion Method." Traitement du Signal 38, no. 3 (2021): 607–17. http://dx.doi.org/10.18280/ts.380308.

Full text
Abstract:
Multimodal image fusion is now widely used as a processing tool in many image-related applications. Different sensors have been developed to capture useful information, chiefly infrared (IR) and visible (VI) image sensors; fusing their outputs provides better and more accurate scene information. The main application areas of such fused images are military, surveillance, and remote sensing. For better target identification and overall scene understanding, the fused image must offer higher contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpening filter and a morphological hat transform are applied separately to the resized IR and VI images. The DWT is used to produce low-frequency and high-frequency sub-bands. A "filters-based mean-weighted fusion rule" and a "filters-based max-weighted fusion rule" are newly introduced in this algorithm for combining the low-frequency and high-frequency sub-bands, respectively. The fused image is reconstructed with the IDWT. The proposed method outperforms similar existing techniques both subjectively and objectively.
APA, Harvard, Vancouver, ISO, and other styles
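The pipeline the abstract describes (DWT decomposition, separate rules for low- and high-frequency sub-bands, IDWT reconstruction) can be sketched in NumPy. The paper's filters-based weighted rules are not reproduced here; a plain mean rule and a max-absolute rule stand in for them, and a hand-rolled one-level Haar transform replaces a wavelet library so the sketch is self-contained:

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar DWT: returns (cA, (cH, cV, cD))."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,        # low-frequency approximation
            ((a - b + c - d) / 2,       # horizontal detail
             (a + b - c - d) / 2,       # vertical detail
             (a - b - c + d) / 2))      # diagonal detail

def ihaar2(cA, details):
    """Inverse of haar2 (perfect reconstruction)."""
    cH, cV, cD = details
    h, w = cA.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (cA + cH + cV + cD) / 2
    x[0::2, 1::2] = (cA - cH + cV - cD) / 2
    x[1::2, 0::2] = (cA + cH - cV - cD) / 2
    x[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return x

def fuse_dwt(ir, vi):
    """Fuse two registered, equally sized images in the Haar DWT domain."""
    cA1, (cH1, cV1, cD1) = haar2(ir)
    cA2, (cH2, cV2, cD2) = haar2(vi)
    # Low-frequency sub-bands: mean rule (stand-in for the paper's rule).
    cA = (cA1 + cA2) / 2
    # High-frequency sub-bands: keep the larger-magnitude coefficient.
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    return ihaar2(cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
```

Fusing an image with itself is the identity, which gives a quick sanity check on the transform pair.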
2

Umeda, Kazunori, Jun Ota, and Hisayuki Kimura. "Fusion of Multiple Ultrasonic Sensor Data and Image Data for Measuring an Object’s Motion." Journal of Robotics and Mechatronics 17, no. 1 (2005): 36–43. http://dx.doi.org/10.20965/jrm.2005.p0036.

Full text
Abstract:
Robot sensing requires two types of observation: intensive and wide-angle. We selected multiple ultrasonic sensors for intensive observation and an image sensor for wide-angle observation, measuring a moving object's motion with two kinds of fusion: one fusing multiple ultrasonic sensor data, and the other fusing the two types of sensor data. The fusion of multiple ultrasonic sensor data takes advantage of an object's movement from the measurement range of one ultrasonic sensor into another sensor's range. Both fusions are formulated in a Kalman filter framework. Simulation and experiments demonstrate the method's effectiveness and its applicability to an actual robot system.
APA, Harvard, Vancouver, ISO, and other styles
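The Kalman-filter formulation of sequentially fusing two position sensors can be illustrated with a minimal constant-velocity sketch. The motion model, noise covariances, and measurement values below are invented for the example, not taken from the paper:

```python
import numpy as np

def kf_step(x, P, z_list, R_list, F, Q, H):
    """One predict step followed by sequential updates, one per sensor."""
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Sequentially fuse each sensor's measurement of the same state
    for z, R in zip(z_list, R_list):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])              # both sensors observe position
Q = 1e-4 * np.eye(2)
R_us = np.array([[1e-2]])               # ultrasonic: accurate, narrow range
R_img = np.array([[1e-1]])              # image sensor: coarser, wide angle

x, P = np.zeros(2), np.eye(2)
for k in range(1, 101):
    true_pos = 0.5 * k * dt             # object moving at 0.5 units/s
    z = np.array([true_pos])            # noiseless measurements for the demo
    x, P = kf_step(x, P, [z, z], [R_us, R_img], F, Q, H)
```

After 100 steps the state estimate has converged to the true position (5.0) and velocity (0.5).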
3

Jittawiriyanukoon, C., and V. Srisarkun. "Evaluation of weighted fusion for scalar images in multi-sensor network." Bulletin of Electrical Engineering and Informatics 10, no. 2 (2021): 911–16. http://dx.doi.org/10.11591/eei.v10i2.1792.

Full text
Abstract:
Regular scalar-based image fusion faces the problem of how to prioritize and proportionally enrich image details in a multi-sensor network. Fusing and manipulating computer vision patterns from multiple sensors is practical. A fusion (integration) rule, bit-depth conversion, and truncation (due to size conflicts) of the image information are studied. Across the multi-sensor images, a fusion rule based on weighted priority is employed to reconstruct the prescribed details of the fused image. Experimental results confirm that the associated details of multiple images can be fused, the prescription is executed, and the features are improved. Visualizations in both the spatial and frequency domains are also presented to support the image analysis.
APA, Harvard, Vancouver, ISO, and other styles
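A hedged sketch of two ingredients the abstract names, bit-depth conversion and a weighted-priority fusion rule, might look like this; the normalisation and the example weights are illustrative assumptions, not the authors' exact rule:

```python
import numpy as np

def to_unit(img):
    """Bit-depth conversion: map any unsigned-integer image to float [0, 1]."""
    info = np.iinfo(img.dtype)
    return img.astype(np.float64) / info.max

def weighted_fuse(imgs, weights, out_dtype=np.uint8):
    """Priority-weighted scalar fusion of registered single-channel images."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()            # normalise the priorities
    acc = sum(w * to_unit(im) for w, im in zip(weights, imgs))
    out_max = np.iinfo(out_dtype).max
    return np.clip(acc * out_max, 0, out_max).round().astype(out_dtype)

a = np.full((4, 4), 255, dtype=np.uint8)   # saturated 8-bit sensor
b = np.full((4, 4), 0, dtype=np.uint16)    # dark 16-bit sensor
f = weighted_fuse([a, b], weights=[3, 1])  # prioritise the first sensor 3:1
```

With weights 3:1 the fused value is 0.75 in unit range, i.e. 191 in 8-bit output.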
4

Praveena, S. Mary, R. Kanmani, and A. K. Kavitha. "A neuro fuzzy image fusion using block based feature level method." International Journal of Informatics and Communication Technology (IJ-ICT) 9, no. 3 (2020): 195. http://dx.doi.org/10.11591/ijict.v9i3.pp195-204.

Full text
Abstract:
Image fusion is a subfield of image processing in which two or more images are fused to create an image in which all objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, objects closer to the camera are in focus while farther objects are blurred; conversely, when the farther objects are in focus, the closer objects are blurred. To obtain an image in which all objects are in focus, image fusion is performed either in the spatial domain or in a transform domain. The applications of image processing have grown immensely in recent times. Owing to the limited depth of field of optical lenses, especially at longer focal lengths, it is usually impossible to capture an image in which all objects are in focus. Fusion therefore plays an important role in other image processing tasks such as image segmentation, edge detection, stereo matching, and image enhancement. Hence, a novel feature-level multi-focus image fusion technique that fuses multi-focus images is proposed. Results of extensive experiments are presented to highlight the efficiency and utility of the proposed technique, and the work further compares fuzzy-based image fusion with the neuro-fuzzy fusion technique using quality evaluation indices.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Zhi-guo, Wei Wang, and Baolin Su. "Multi-sensor Image Fusion Algorithm Based on Multiresolution Analysis." International Journal of Online Engineering (iJOE) 14, no. 06 (2018): 44. http://dx.doi.org/10.3991/ijoe.v14i06.8697.

Full text
Abstract:
To solve the fusion problem of visible and infrared images, quality evaluation indices for image fusion were defined on the basis of fusion algorithms such as region fusion, the wavelet transform, spatial frequency, the Laplacian pyramid, and principal component analysis. The curvelet transform was then used in place of the wavelet transform to exploit its superior representation of curves. The method fuses the intensity channel with the infrared image and transforms the result back to the original color space to obtain the fused color image. Finally, experiments were carried out on two groups of images taken at different time intervals; the fused images were compared with those produced by the first five algorithms, and their quality was evaluated. The experiments showed that the curvelet-based image fusion algorithm performs well and integrates the information of visible and infrared images effectively. It is concluded that curvelet-based image fusion is a feasible multi-sensor image fusion algorithm based on multi-resolution analysis.
APA, Harvard, Vancouver, ISO, and other styles
6

Tan, Hai Feng, Wen Jie Zhao, De Jun Li, and Tian Wen Luo. "NSCT-Based Multi-Sensor Image Fusion Algorithm." Applied Mechanics and Materials 347-350 (August 2013): 3212–16. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3212.

Full text
Abstract:
To address the tendency of the favoritism (selection) method and the averaging method to impair image contrast in multi-sensor image fusion, an image fusion algorithm based on the NSCT is proposed. First, the algorithm applies the NSCT to registered multi-sensor images of the same scene; different fusion strategies are then adopted for the low-frequency and high-frequency directional sub-band coefficients: a regional-energy adaptive weighting method for the low-frequency sub-band coefficients, and a regional-energy-matching scheme combining weighted averaging and selection for the directional sub-band coefficients. Finally, the fused image is obtained by the inverse NSCT. Experiments were conducted on IR/visible-light and multi-focus image pairs, and the fused images were evaluated objectively. The results show that the fused images obtained with this algorithm have better subjective visual quality and objective quantitative indicators, and that the algorithm is superior to traditional fusion methods.
APA, Harvard, Vancouver, ISO, and other styles
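The regional-energy adaptive weighting used here for the low-frequency sub-band can be sketched as follows; the 3×3 energy window and the weighting formula are common choices assumed for illustration, not lifted from the paper:

```python
import numpy as np

def local_energy(c, r=1):
    """Sum of squared coefficients over a (2r+1) x (2r+1) neighbourhood."""
    p = np.pad(c * c, r, mode='edge')
    h, w = c.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(2 * r + 1):          # accumulate shifted copies instead
        for dx in range(2 * r + 1):      # of an explicit convolution
            out += p[dy:dy + h, dx:dx + w]
    return out

def fuse_lowfreq(cA1, cA2, eps=1e-12):
    """Regional-energy adaptive weighted average of low-frequency sub-bands."""
    e1, e2 = local_energy(cA1), local_energy(cA2)
    w1 = e1 / (e1 + e2 + eps)            # higher local energy -> higher weight
    return w1 * cA1 + (1 - w1) * cA2
```

A sub-band with all the energy dominates the result; identical inputs pass through unchanged.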
7

Huang, Shanshan, Yikun Yang, Xin Jin, Ya Zhang, Qian Jiang, and Shaowen Yao. "Multi-Sensor Image Fusion Using Optimized Support Vector Machine and Multiscale Weighted Principal Component Analysis." Electronics 9, no. 9 (2020): 1531. http://dx.doi.org/10.3390/electronics9091531.

Full text
Abstract:
Multi-sensor image fusion combines the complementary information of source images from multiple sensors. Conventional fusion schemes based on signal processing techniques have been studied extensively, and machine learning techniques have recently been introduced into image fusion because of their prominent advantages. In this work, a new multi-sensor image fusion method based on the support vector machine and principal component analysis is proposed. First, key features of the source images are extracted by combining a sliding-window technique with five effective evaluation indicators. Second, a trained support vector machine is used to label the focused and non-focused regions of the source images from the extracted features, yielding a fusion decision for each source image. A consistency verification operation then absorbs isolated singular points in the classifier's decisions. Finally, a novel method based on principal component analysis and a multi-scale sliding window handles the disputed areas in the fusion decision pair. Experiments are performed to verify the performance of the combined method.
APA, Harvard, Vancouver, ISO, and other styles
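The decision-map stage of such a pipeline can be sketched as below. For a self-contained example, a simple local-variance focus measure stands in for the trained SVM, and a majority filter plays the role of the consistency verification step:

```python
import numpy as np

def local_variance(img, r=3):
    """Sliding-window variance, a classic focus measure."""
    h, w = img.shape
    out = np.zeros((h, w))
    p = np.pad(img.astype(np.float64), r, mode='reflect')
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + 2 * r + 1, x:x + 2 * r + 1].var()
    return out

def majority_filter(mask, r=1):
    """Consistency verification: flip isolated decisions to the local majority."""
    h, w = mask.shape
    p = np.pad(mask.astype(np.int32), r, mode='edge')
    votes = np.zeros((h, w), dtype=np.int32)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            votes += p[dy:dy + h, dx:dx + w]
    return votes > ((2 * r + 1) ** 2) // 2

def fuse_multifocus(a, b):
    """Pick, per pixel, the source image that is locally sharper."""
    decision = local_variance(a) >= local_variance(b)   # stand-in for the SVM
    decision = majority_filter(decision)                # absorb singular points
    return np.where(decision, a, b)
```

A textured (sharp) source wins everywhere over a flat (defocused) one, and an isolated decision is flipped by the majority vote.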
8

Xiaobing, Zhang, Zhou Wei, and Song Mengfei. "Oil exploration oriented multi-sensor image fusion algorithm." Open Physics 15, no. 1 (2017): 188–96. http://dx.doi.org/10.1515/phys-2017-0020.

Full text
Abstract:
In order to accurately forecast fractures and their dominant direction in oil exploration, we propose a novel multi-sensor image fusion algorithm. The main innovations of this paper are the introduction of the dual-tree complex wavelet transform (DTCWT) into data fusion and the division of an image into several regions before fusion. The DTCWT is a wavelet transform designed to solve the signal decomposition and reconstruction problem using two parallel real-wavelet transforms. We use the DTCWT to segment the features of the input images and generate a region map, and then use the normalized Shannon entropy of each region to design the priority function. To test the effectiveness of the proposed multi-sensor image fusion algorithm, four standard image pairs are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for oil exploration images.
APA, Harvard, Vancouver, ISO, and other styles
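The normalized-Shannon-entropy priority function mentioned in the abstract might be sketched like this; the 64-bin histogram and the [0, 1] intensity range are assumptions made for the example, not the paper's exact parameters:

```python
import numpy as np

def region_priority(region, bins=64):
    """Normalised Shannon entropy of a region's intensity histogram, in [0, 1]."""
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def choose_region(r1, r2):
    """Region-level fusion: keep the region with the higher entropy priority."""
    return r1 if region_priority(r1) >= region_priority(r2) else r2
```

A constant region has zero priority, so a textured region from the other source is always preferred.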
9

Chen, C. T., X. Ouyang, W. H. Wong, et al. "Sensor fusion in image reconstruction." IEEE Transactions on Nuclear Science 38, no. 2 (1991): 687–92. http://dx.doi.org/10.1109/23.289375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Furtado, Luiz Felipe de Almeida, Thiago Sanna Freire Silva, Pedro José Farias Fernandes, and Evelyn Márcia Leão de Moraes Novo. "Land cover classification of Lago Grande de Curuai floodplain (Amazon, Brazil) using multi-sensor and image fusion techniques." Acta Amazonica 45, no. 2 (2015): 195–202. http://dx.doi.org/10.1590/1809-4392201401439.

Full text
Abstract:
Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Image and Sensor Fusion"

1

Crow, Mason W. "Multiple sensor credit apportionment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FCrow.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chow, Khin Choong. "Fusion of images from Dissimilar Sensor systems /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FChow.pdf.

Full text
Abstract:
Thesis (M.S. in Combat Systems Technology)--Naval Postgraduate School, Dec. 2004. Thesis Advisor(s): Monique P. Fargues, Alfred W. Cooper. Includes bibliographical references (p. 73-75). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
3

Fallah, Haghmohammadi Hamidreza. "Fever Detection for Dynamic Human Environment Using Sensor Fusion." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37332.

Full text
Abstract:
The objective of this thesis is to present an algorithm for processing infrared images and accomplishing automatic detection and path tracking of moving subjects with fever. Detection is based on two main features: the distinction between the geometry of a human face and other objects in the camera's field of view, and the temperature of the radiating object. These features are used to track the identified person with fever. The position of the camera with respect to the walkers' direction of motion proved to be critical in this process. Infrared thermography is a remote sensing technique that measures temperature from emitted infrared radiation. This application may be used for fever screening in major public places such as airports and hospitals. For this study, we first look at human bodies and objects in the line of view whose temperatures exceed the normal human body temperature (37.8 °C in the morning and 38.3 °C in the evening). As part of the experimental study, two humans with different body temperatures walking a path were subjected to automatic fever detection, applied to track the detected human with fever. The algorithm consists of image processing to threshold objects based on temperature, and template matching used for fever detection in a dynamic human environment.
APA, Harvard, Vancouver, ISO, and other styles
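The temperature-thresholding step of such a system can be sketched as below. The 37.8 °C threshold comes from the abstract; the synthetic frame and the centroid output are illustrative assumptions (the thesis additionally matches face geometry and templates before declaring a detection):

```python
import numpy as np

FEVER_C = 37.8   # morning fever threshold quoted in the abstract

def fever_centroids(frame_c, threshold=FEVER_C):
    """Threshold a radiometric IR frame (degrees C) and return the centroid
    of the hot pixels, or None when nothing exceeds the threshold."""
    ys, xs = np.nonzero(frame_c > threshold)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

frame = np.full((10, 10), 36.5)   # ambient / normal-skin background
frame[2:4, 6:8] = 38.6            # febrile subject's face region
```

The hot 2x2 block at rows 2-3, columns 6-7 yields a centroid of (2.5, 6.5), which a tracker could then follow frame to frame.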
4

Björklund, Emil, and Johan Hjorth. "Towards Reliable Computer Vision in Aviation: An Evaluation of Sensor Fusion and Quality Assessment." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48597.

Full text
Abstract:
Research conducted in the aviation industry includes two major areas, increased safety and a reduction of the environmental footprint. This thesis investigates the possibilities of increased situational awareness with computer vision in avionics systems. Image fusion methods are evaluated with appropriate pre-processing of three image sensors, one in the visual spectrum and two in the infra-red spectrum. The sensor setup is chosen to cope with the different weather and operational conditions of an aircraft, with a focus on the final approach and landing phases. Extensive image quality assessment metrics derived from a systematic review is applied to provide a precise evaluation of the image quality of the fusion methods. A total of four image fusion methods are evaluated, where two are convolutional network-based, using the networks for feature extraction in the detailed layers. Other approaches with visual saliency maps and sparse representation are also evaluated. With methods implemented in MATLAB, results show that a conventional method implementing a rolling guidance filter for layer separation and visual saliency map provides the best results. The results are further confirmed with a subjective ranking test, where the image quality of the fusion methods is evaluated further.
APA, Harvard, Vancouver, ISO, and other styles
5

Persson, Martin. "Semantic Mapping using Virtual Sensors and Fusion of Aerial Images with Sensor Data from a Ground Vehicle." Doctoral thesis, Örebro : Örebro University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-2186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Matsumoto, Takeshi. "Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots." Flinders University, Computer Science, Engineering and Mathematics, 2010. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20100302.131127.

Full text
Abstract:
A mobile robot system provides a grounded platform for a wide variety of interactive systems to be developed and deployed. The mobility provided by the robot presents unique challenges as it must observe the state of the surroundings while observing the state of itself with respect to the environment. The scope of the discipline includes the mechanical and hardware issues, which limit and direct the capabilities of the software considerations. The systems that are integrated into the mobile robot platform include both specific task-oriented and fundamental modules that define the core behaviour of the robot. While the former can sometimes be developed separately and integrated at a later stage, the core modules are often custom designed early on to suit the individual robot system depending on the configuration of the mechanical components. This thesis covers the issues encountered and the resolutions that were implemented during the development of a low-cost mobile robot platform using off-the-shelf sensors, with a particular focus on the algorithmic side of the system. The incrementally developed modules target the localisation and mapping aspects by incorporating a number of different sensors to gather the information of the surroundings from different perspectives, simultaneously or sequentially combining the measurements to disambiguate and support each other. Although there is a heavy focus on the image processing techniques, the integration with the other sensors and the characteristics of the platform itself are included in the designs and analyses of the core and interactive modules. A visual odometry technique is implemented for the localisation module, which includes calibration processes, feature tracking, synchronisation between multiple sensors, as well as short- and long-term landmark identification to calculate the relative pose of the robot in real time.
The mapping module considers the interpretation and the representation of sensor readings to simplify and hasten the interactions between multiple sensors, while selecting the appropriate attributes and characteristics to construct a multi-attributed model of the environment. The modules that are developed are applied to realistic indoor scenarios, which are taken into consideration in some of the algorithms to enhance the performance through known constraints. As the performance of algorithms depends significantly on the hardware, the environment, and the number of concurrently running sensors and modules, comparisons are made against various implementations that have been developed throughout the project.
APA, Harvard, Vancouver, ISO, and other styles
7

Fox, Elizabeth Lynn. "Cognitive Analysis of Multi-sensor Information." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1435681970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rajan, Krithika. "Analysis of pavement condition data employing Principal Component Analysis and sensor fusion techniques." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ciambrone, Andrew James. "Environment Mapping in Larger Spaces." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/74984.

Full text
Abstract:
Spatial mapping or environment mapping is the process of exploring a real-world environment and creating its digital representation. To create convincing mixed reality programs, an environment mapping device must be able to detect a user's position and map the user's environment. Currently available commercial spatial mapping devices mostly use an infrared camera to obtain a depth map, which is effective only at short to medium distances (3-4 meters). This work describes an extension of existing environment mapping devices and techniques that enables mapping of larger architectural environments using a combination of a camera, an Inertial Measurement Unit (IMU), and Light Detection and Ranging (LIDAR) devices, supported by sensor fusion and computer vision techniques. The proposed system has three main parts: data collection and data fusion using embedded hardware, data processing (segmentation), and creation of a geometry mesh of the environment. The developed system was evaluated on its ability to determine the dimensions of a room and of objects within it. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens.
APA, Harvard, Vancouver, ISO, and other styles
10

Karvir, Hrishikesh. "Design and Validation of a Sensor Integration and Feature Fusion Test-Bed for Image-Based Pattern Recognition Applications." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1291753291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Image and Sensor Fusion"

1

Mitchell, H. B. Image Fusion: Theories, Techniques and Applications. Springer-Verlag Berlin Heidelberg, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Clark, James J. Data Fusion for Sensory Information Processing Systems. Springer US, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abdelgawad, Ahmed. Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks. Springer US, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mitchell, H. B. Data Fusion: Concepts and Ideas. 2nd ed. Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Clark, James Joseph. Data fusion for sensory information processing systems. Kluwer Academic Publishers, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Shengyong. Active Sensor Planning for Multiview Vision Tasks. Springer-Verlag Berlin Heidelberg, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mitchell, H. B. Image Fusion. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11216-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xiao, Gang, Durga Prasad Bavirisetti, Gang Liu, and Xingchen Zhang. Image Fusion. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4867-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chaudhuri, Subhasis. Hyperspectral Image Fusion. Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chaudhuri, Subhasis, and Ketan Kotwal. Hyperspectral Image Fusion. Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7470-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Image and Sensor Fusion"

1

Xiao, Gang, Durga Prasad Bavirisetti, Gang Liu, and Xingchen Zhang. "Multi-sensor Dynamic Image Fusion." In Image Fusion. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4867-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mitchell, H. B. "Image Sensors." In Image Fusion. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11216-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Dingbing, Aolei Yang, Lingling Zhu, and Chi Zhang. "Survey of Multi-sensor Image Fusion." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45283-7_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ciążyński, Karol, and Artur Sierszeń. "Sensor Fusion Enhancement for Mobile Positioning Systems." In Image Processing and Communications Challenges 7. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23814-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tian, Tian, and Bin Zhang. "A Haze Removal Method Based on Additional Depth Information and Image Fusion." In Sensor Networks and Signal Processing. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4917-5_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Almasri, Feras, and Olivier Debeir. "Multimodal Sensor Fusion in Single Thermal Image Super-Resolution." In Computer Vision – ACCV 2018 Workshops. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21074-8_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jain, Shruti, Mohit Sachdeva, Parth Dubey, and Anish Vijan. "Multi-sensor Image Fusion Using Intensity Hue Saturation Technique." In Communications in Computer and Information Science. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0111-1_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aiazzi, B., L. Alparone, S. Baronti, V. Cappellini, R. Carlà, and L. Mortelli. "Pyramid-based multi-sensor image data fusion with enhancement of textural features." In Image Analysis and Processing. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63507-6_188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Williams, Mark L., Richard C. Wilson, and Edwin R. Hancock. "Multi-sensor fusion with Bayesian inference." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_96.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Heng, Terence Chek Hion, Yoshinori Kuno, and Yoshiaki Shirai. "Combination of active sensing and sensor fusion for collision avoidance in mobile robots." In Image Analysis and Processing. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_169.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Image and Sensor Fusion"

1

Di Santo, Simone, Nadege Bize-Forest, Isabelle Le Nir, and Carlos Maeso. "Wellbore Images Digital Fusion: Beyond Single-Sensor Physical Constraints." In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0007.

Full text
Abstract:
In the modern oilfield, borehole images can be considered the minimal representative element of any well-planned geological model or interpretation. In the same borehole it is common to acquire multiple images using different physics and/or resolutions. The challenge for any petro-technical expert is to extract detailed information from several images simultaneously without losing the petrophysical information of the formation. This work shows an innovative approach that combines several borehole images into one new multi-dimensional, fused, high-resolution image that allows, at a glance, a qualitative petrophysical and geological interpretation while maintaining quantitative measurement properties. The new image is created by applying color mathematics and advanced image fusion techniques. In the first stage, low-resolution LWD nuclear images are merged into one multichannel or multiphysics image that integrates the petrophysical measurement information of each input image. A specific transfer function was developed; it normalizes the input measurements into color intensities that, combined in an RGB (red-green-blue) color space, are visualized as a full-color image. The strong and bilateral connection between measurements and colors enables processing that can be used to produce ad-hoc secondary images. In the second stage, the resolution of the multiphysics image is increased by applying a specific type of image fusion: pansharpening. The goal is to inject the details and texture present in a high-resolution image into the low-resolution multiphysics image without compromising the petrophysical measurements. The pansharpening algorithm was developed especially for the borehole-image application and compared with other established sharpening methods. The resulting high-resolution multiphysics image integrates all input measurements in the form of RGB colors and the texture from the high-resolution image.
The image fusion workflow has been tested using LWD GR, density, and photoelectric factor images and a high-resolution resistivity image. Image fusion is an innovative method that extends beyond the physical constraints of single sensors: the result is a unique image dataset that simultaneously contains geological and petrophysical information at the highest resolution. This work also gives examples of applications of the new fused image.
APA, Harvard, Vancouver, ISO, and other styles
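The pansharpening algorithm above is tailored to borehole images and not spelled out in the abstract. As a generic illustration of the underlying idea (injecting high-resolution detail by ratio scaling), a Brovey-style sketch might look like:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-12):
    """Ratio-based pansharpening.

    ms  : (bands, H, W) low-resolution channels, already resampled to PAN size
    pan : (H, W) high-resolution panchromatic image (here it would be the
          high-resolution resistivity image; this pairing is an assumption)
    """
    intensity = ms.mean(axis=0)            # synthetic low-resolution intensity
    gain = pan / (intensity + eps)         # per-pixel detail-injection ratio
    return ms * gain[None, :, :]           # scale every band by the ratio

ms = np.stack([np.full((4, 4), 0.2), np.full((4, 4), 0.6)])
pan = np.full((4, 4), 0.8)                 # pan is 2x the mean intensity (0.4)
out = brovey_pansharpen(ms, pan)
```

Because scaling is multiplicative, the ratio between bands (the "spectral" content carrying the measurements) is preserved while the pan detail is injected.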
2

Palubinskas, Gintautas, and Peter Reinartz. "Multi-resolution, multi-sensor image fusion: general fusion framework." In 2011 Joint Urban Remote Sensing Event (JURSE). IEEE, 2011. http://dx.doi.org/10.1109/jurse.2011.5764782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3. Hornegger, Joachim. "Sensor Data Fusion and Image Registration." In Imaging Systems and Applications. OSA, 2013. http://dx.doi.org/10.1364/isa.2013.iw1e.3.
4. Liang, Jing, and Qilian Liang. "Image fusion on radar sensor networks." In Welcome to Mobile Content Quality of Experience. ACM Press, 2007. http://dx.doi.org/10.1145/1577504.1577507.
5. "Analysis of Multi-sensor Image Fusion." In 2018 5th International Conference on Electrical & Electronics Engineering and Computer Science. Francis Academic Press, 2018. http://dx.doi.org/10.25236/iceeecs.2018.070.
6. Bavirisetti, Durga Prasad, Gang Xiao, and Gang Liu. "Multi-sensor image fusion based on fourth order partial differential equations." In 2017 20th International Conference on Information Fusion (Fusion). IEEE, 2017. http://dx.doi.org/10.23919/icif.2017.8009719.
7. Lavely, E. M., and E. P. Blasch. "Sensor model appraisal for image registration." In 2005 7th International Conference on Information Fusion. IEEE, 2005. http://dx.doi.org/10.1109/icif.2005.1591882.
8. Pavlovic, R., and V. Petrovic. "Multisensor colour image fusion for night vision." In Sensor Signal Processing for Defence (SSPD 2012). Institution of Engineering and Technology, 2012. http://dx.doi.org/10.1049/ic.2012.0107.
9. Thomopoulos, Stelios C. A. "Sensor Integration And Data Fusion." In 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, edited by Paul S. Schenker. SPIE, 1990. http://dx.doi.org/10.1117/12.969974.
10. Sawada, K., K. Takahashi, T. Iwata, and T. Hattori. "CMOS based ion image sensor — fusion of bio sensor technology and image sensor technology." In 2017 19th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS). IEEE, 2017. http://dx.doi.org/10.1109/transducers.2017.7994005.

Reports on the topic "Image and Sensor Fusion"

1. Wolff, Lawrence B. Differential Geometric Tools for Image Sensor Fusion. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada386912.
2. Willsky, Alan S. Multiresolution, Geometric, and Learning Methods in Statistical Image Processing, Object Recognition, and Sensor Fusion. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada425745.
3. Pothitos, Michail. Multi-Sensor Image Fusion for Target Recognition in the Environment of Network Decision Support Systems. Defense Technical Information Center, 2015. http://dx.doi.org/10.21236/ad1009200.
4. Garg, Devendra P., and Manish Kumar. Sensor Modeling and Multi-Sensor Data Fusion. Defense Technical Information Center, 2005. http://dx.doi.org/10.21236/ada440553.
5. Akita, Richard, Robert Pap, and Joel Davis. Biologically Inspired Sensor Fusion. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada389747.
6. Meitzler, Thomas J., David Bednarz, E. J. Sohn, Kimberly Lane, and Darryl Bryk. Fuzzy Logic Based Image Fusion. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada405123.
7. Baim, Paul. Dynamic Database for Sensor Fusion. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada363915.
8. Hero, Alfred O., III, and Raviv Raich. Performance-driven Multimodality Sensor Fusion. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada565491.
9. Rockwell International, Anaheim, CA. Multi-Sensor Feature Level Fusion. Defense Technical Information Center, 1991. http://dx.doi.org/10.21236/ada237106.
10. Meyer, David, and Jeffrey Remmel. Distributed Algorithms for Sensor Fusion. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada415039.