Journal articles on the topic 'Multispectral'

Consult the top 50 journal articles for your research on the topic 'Multispectral.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sun, Mingyue, Qian Li, Xuzi Jiang, Tiantian Ye, Xinju Li, and Beibei Niu. "Estimation of Soil Salt Content and Organic Matter on Arable Land in the Yellow River Delta by Combining UAV Hyperspectral and Landsat-8 Multispectral Imagery." Sensors 22, no. 11 (May 25, 2022): 3990. http://dx.doi.org/10.3390/s22113990.

Abstract:
Rapid and large-scale estimation of soil salt content (SSC) and organic matter (SOM) using multi-source remote sensing is of great significance for the real-time monitoring of arable land quality. In this study, we simultaneously predicted SSC and SOM on arable land in the Yellow River Delta (YRD), based on ground measurement data, unmanned aerial vehicle (UAV) hyperspectral imagery, and Landsat-8 multispectral imagery. The reflectance averaging method was used to resample UAV hyperspectra to simulate the Landsat-8 OLI data (referred to as fitted multispectra). Correlation analyses and the multiple regression method were used to construct SSC and SOM hyperspectral/fitted multispectral estimation models. Then, the best SSC and SOM fitted multispectral estimation models based on UAV images were applied to a reflectance-corrected Landsat-8 image, and SSC and SOM distributions were obtained for the YRD. The estimation results revealed that moderately salinized arable land accounted for the largest proportion of area in the YRD (48.44%), with the SOM of most arable land (60.31%) at medium or lower levels. A significant negative spatial correlation was detected between SSC and SOM in most regions. This study integrates the advantages of UAV hyperspectral and satellite multispectral data, thereby realizing rapid and accurate estimation of SSC and SOM for a large-scale area, which is of great significance for the targeted improvement of arable land in the YRD.
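
As a rough illustration of the reflectance-averaging step used to produce the "fitted multispectra", the sketch below averages all UAV hyperspectral channels that fall inside each Landsat-8 OLI band. The band limits and array layout are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Approximate Landsat-8 OLI band limits in nm (illustrative, not the paper's exact ranges).
OLI_BANDS = {
    "blue":  (450, 515),
    "green": (525, 600),
    "red":   (630, 680),
    "nir":   (845, 885),
}

def fit_multispectra(wavelengths_nm, reflectance):
    """Resample hyperspectral reflectance (..., n_channels) to 'fitted multispectral'
    bands by averaging the channels whose centre wavelengths fall in each OLI band."""
    fitted = {}
    for name, (lo, hi) in OLI_BANDS.items():
        mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
        fitted[name] = reflectance[..., mask].mean(axis=-1)
    return fitted
```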
2

Mansfield, J. R. "Multispectral Imaging." Veterinary Pathology 51, no. 1 (October 15, 2013): 185–210. http://dx.doi.org/10.1177/0300985813506918.

3

Li, Fangyu, Jie Qi, Bin Lyu, and Kurt J. Marfurt. "Multispectral coherence." Interpretation 6, no. 1 (February 1, 2018): T61–T69. http://dx.doi.org/10.1190/int-2017-0112.1.

Abstract:
Seismic coherence is a routine measure of seismic reflection similarity for interpreters seeking structural boundary and discontinuity features that may be not properly highlighted on original amplitude volumes. One mostly wishes to use the broadest band seismic data for interpretation. However, because of thickness tuning effects, spectral components of specific frequencies can highlight features of certain thicknesses with higher signal-to-noise ratio than others. Seismic stratigraphic features (e.g., channels) may be buried in the full-bandwidth data, but can be “lit up” at certain spectral components. For the same reason, coherence attributes computed from spectral voice components (equivalent to a filter bank) also often provide sharper images, with the “best” component being a function of the tuning thickness and the reflector alignment across faults. Although one can corender three coherence images using red-green-blue (RGB) blending, a display of the information contained in more than three volumes in a single image is difficult. We address this problem by combining covariance matrices for each spectral component, adding them together, resulting in a “multispectral” coherence algorithm. The multispectral coherence images provide better images of channel incisement, and they are less noisy than those computed from the full bandwidth data. In addition, multispectral coherence also provides a significant advantage over RGB blended volumes. The information content from unlimited spectral voices can be combined into one volume, which is useful for a posteriori/further processing, such as color corendering display with other related attributes, such as petrophysics parameters plotted against a polychromatic color bar. We develop the value of multispectral coherence by comparing it with the RGB blended volumes and coherence computed from spectrally balanced, full-bandwidth seismic amplitude volume from a megamerge survey acquired over the Red Fork Formation of the Anadarko Basin, Oklahoma.
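
The central step, summing per-voice covariance matrices before computing coherence, can be sketched in a few lines. The window layout and the largest-eigenvalue-over-trace coherence estimate below are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def multispectral_coherence(voice_windows):
    """Eigenstructure-style coherence from summed spectral-voice covariance matrices.

    voice_windows: list of 2-D arrays, one per spectral voice, each shaped
    (n_traces, n_samples) for a single analysis window (hypothetical layout;
    the published workflow runs over whole volumes).
    """
    n_traces = voice_windows[0].shape[0]
    c_sum = np.zeros((n_traces, n_traces))
    for d in voice_windows:
        d = d - d.mean(axis=1, keepdims=True)   # de-mean each trace in the window
        c_sum += d @ d.T                        # covariance matrix of this voice
    eigvals = np.linalg.eigvalsh(c_sum)         # ascending eigenvalues
    return eigvals[-1] / max(np.trace(c_sum), 1e-12)
```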
4

Jiang, Xiaohua, Xiaoxiao Zhang, Ming Liu, and Jie Tian. "Joint Panchromatic and Multispectral Geometric Calibration Method for the DS-1 Satellite." Remote Sensing 16, no. 2 (January 22, 2024): 433. http://dx.doi.org/10.3390/rs16020433.

Abstract:
The DS-1 satellite was launched successfully on 3 June 2021 from the Taiyuan Satellite Launch Center. The satellite is equipped with a 1 m panchromatic and a 4 m multispectral sensor, providing high-resolution and wide-field optical remote sensing imaging capabilities. For satellites equipped with panchromatic and multispectral sensors, conventional geometric processing methods involved separate calibration of the panchromatic sensor and the multispectral sensor. This approach produced distinct internal and external calibration parameters for the respective bands, and also resulted in nonlinear geometric misalignments between the panchromatic and multispectral images due to satellite jitter and other factors. To better capitalize on the high spatial resolution of panchromatic imagery and the superior spectral resolution of multispectral imagery, it is necessary to register the calibrated panchromatic and multispectral images. When registering separately calibrated panchromatic and multispectral images, poor consistency between the two leads to a small number of corresponding points, resulting in poor accuracy and registration quality. To address this issue, we propose a joint panchromatic and multispectral calibration method to register the panchromatic and multispectral images. Before geometric calibration, corresponding point matching must be performed. When matching, the small interval between the panchromatic and multispectral Charge-Coupled Devices (CCDs) results in a small intersection angle of the corresponding points between the panchromatic and multispectral images. As a result, the consistency between the spectral bands significantly improves, and the matched corresponding points have a more uniform distribution and wider coverage. The technique enhances the consistency of registration accuracy across both the panchromatic and multispectral bands. Experiments demonstrate that the joint calibration method yields a registration accuracy between the panchromatic and multispectral bands of better than 0.3 pixels.
5

Abdolahpoor, Asma, and Peyman Kabiri. "New texture-based pansharpening method using wavelet packet transform and PCA." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 04 (May 7, 2020): 2050025. http://dx.doi.org/10.1142/s0219691320500253.

Abstract:
Image fusion is an important concept in remote sensing. Earth observation satellites provide both high-resolution panchromatic and low-resolution multispectral images. Pansharpening aims at the fusion of a low-resolution multispectral image with a high-resolution panchromatic image; this fusion generates a multispectral image with high spatial and spectral resolution. This paper reports a new method to improve the spatial resolution of the final multispectral image. The reported work proposes an image fusion method using the wavelet packet transform (WPT) and principal component analysis (PCA), based on the textures of the panchromatic image. Initially, adaptive PCA (APCA) is applied to both the multispectral and panchromatic images. Subsequently, WPT is used to decompose the first principal component of the multispectral and panchromatic images. Using WPT, high-frequency details of both the panchromatic and multispectral images are extracted. In areas with similar texture, spatial details extracted from the panchromatic image are injected into the multispectral image. Experimental results show that the proposed method provides promising results in fusing multispectral images with a high-spatial-resolution panchromatic image. Moreover, the results show that the proposed method successfully improves the spectral features of the multispectral image.
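
A much-simplified sketch of the PCA-plus-wavelet idea (without the texture-adaptive injection that is the paper's actual contribution) is given below. It assumes the multispectral image has already been resampled to the panchromatic grid and uses a plain wavelet decomposition from PyWavelets rather than a full wavelet packet tree.

```python
import numpy as np
import pywt

def pca_wavelet_pansharpen(ms, pan, wavelet="db2", level=2):
    """Toy pansharpening: keep the panchromatic high-frequency (detail) coefficients
    and the first principal component's low-frequency content, then invert the PCA.
    ms: (H, W, B) multispectral image on the panchromatic grid; pan: (H, W)."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(x - mean, rowvar=False))
    vecs = vecs[:, ::-1]                     # principal axes, largest variance first
    pcs = (x - mean) @ vecs
    pc1 = pcs[:, 0].reshape(h, w)

    c_pan = pywt.wavedec2(pan.astype(float), wavelet, level=level)
    c_pc1 = pywt.wavedec2(pc1, wavelet, level=level)
    c_pan[0] = c_pc1[0]                      # swap in PC1's approximation coefficients
    pcs[:, 0] = pywt.waverec2(c_pan, wavelet)[:h, :w].reshape(-1)

    return (pcs @ vecs.T + mean).reshape(h, w, b)
```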
6

Zhou, L., and W. S. El-Deiry. "Multispectral Fluorescence Imaging." Journal of Nuclear Medicine 50, no. 10 (September 16, 2009): 1563–66. http://dx.doi.org/10.2967/jnumed.109.063925.

7

Grant, J., I. J. H. McCrindle, C. Li, and D. R. S. Cumming. "Multispectral metamaterial absorber." Optics Letters 39, no. 5 (February 24, 2014): 1227. http://dx.doi.org/10.1364/ol.39.001227.

8

Jia, Jie, Chuan Ni, Andrew Sarangan, and Keigo Hirakawa. "Fourier multispectral imaging." Optics Express 23, no. 17 (August 19, 2015): 22649. http://dx.doi.org/10.1364/oe.23.022649.

9

Armin, Fahimeh, and Hamid Keshmiri. "Multispectral plasmonic supercells." Journal of Optics 20, no. 7 (June 20, 2018): 075003. http://dx.doi.org/10.1088/2040-8986/aaca0c.

10

Mouats, Tarek, Nabil Aouf, Angel Domingo Sappa, Cristhian Aguilera, and Ricardo Toledo. "Multispectral Stereo Odometry." IEEE Transactions on Intelligent Transportation Systems 16, no. 3 (June 2015): 1210–24. http://dx.doi.org/10.1109/tits.2014.2354731.

11

Bhosle, Udhav, Sumantra Dutta Roy, and Subhasis Chaudhuri. "Multispectral panoramic mosaicing." Pattern Recognition Letters 26, no. 4 (March 2005): 471–82. http://dx.doi.org/10.1016/j.patrec.2004.08.007.

12

Lysenko, S. A., and M. M. Kugeiko. "Quantitative Multispectral Endoscopy." Measurement Techniques 56, no. 11 (February 2014): 1302–10. http://dx.doi.org/10.1007/s11018-014-0372-9.

13

Dębska, Barbara. "SCANNET - multispectral knowledgebase." Journal of Molecular Structure 348 (March 1995): 131–34. http://dx.doi.org/10.1016/0022-2860(95)08606-v.

14

Ntziachristos, Vasilis, and Andreas Buehler. "Multispectral optoacoustic tomography." Scholarpedia 12, no. 3 (2017): 42449. http://dx.doi.org/10.4249/scholarpedia.42449.

15

Orth, Antony, Monica Jo Tomaszewski, Richik N. Ghosh, and Ethan Schonbrun. "Gigapixel multispectral microscopy." Optica 2, no. 7 (July 20, 2015): 654. http://dx.doi.org/10.1364/optica.2.000654.

16

Meng, Ziyi, Mu Qiao, Jiawei Ma, Zhenming Yu, Kun Xu, and Xin Yuan. "Snapshot multispectral endomicroscopy." Optics Letters 45, no. 14 (July 9, 2020): 3897. http://dx.doi.org/10.1364/ol.393213.

17

Bialic, Emilie, and Jean-Louis de Bougrenet de la Tocnaye. "Multispectral imaging axicons." Applied Optics 50, no. 20 (July 8, 2011): 3638. http://dx.doi.org/10.1364/ao.50.003638.

18

Pust, Oliver, and Martin Hubold. "Snapshot Multispectral Imaging." PhotonicsViews 16, no. 2 (March 12, 2019): 34–36. http://dx.doi.org/10.1002/phvs.201900019.

19

Mosny, Milan, and Brian Funt. "Multispectral Colour Constancy." Color and Imaging Conference 14, no. 1 (January 1, 2006): 309–13. http://dx.doi.org/10.2352/cic.2006.14.1.art00057.

20

Kříž, Pavel, and Michal Haindl. "Multispectral Texture Benchmark." Procedia Computer Science 225 (2023): 3143–52. http://dx.doi.org/10.1016/j.procs.2023.10.308.

21

Hao, Qun, Yanfeng Song, Jie Cao, Hao Liu, Qianghui Liu, Jie Li, Qiang Luo, Yang Cheng, Huan Cui, and Lin Liu. "The Development of Snapshot Multispectral Imaging Technology Based on Artificial Compound Eyes." Electronics 12, no. 4 (February 6, 2023): 812. http://dx.doi.org/10.3390/electronics12040812.

Abstract:
In the present study, the advantages of multispectral imaging over hyperspectral imaging in real-time spectral imaging are briefly analyzed, and the advantages and disadvantages of snapshot spectral imaging and other spectral imaging technologies are briefly described. The technical characteristics of artificial compound eyes and multi-aperture imaging and the research significance of snapshot artificial compound eye multispectral imaging are also introduced. The classification and working principle of the snapshot artificial compound eye multispectral imaging system are briefly described. According to the realization method of the optical imaging system, the ACE snapshot multi-aperture multispectral imaging system is divided into plane and curved types. In the planar compound eye spectral imaging system, the technical progress of the multispectral imaging system based on the thin observation module by bound optics (TOMBO) architecture and the multispectral imaging system based on the linear variable spectral filter are introduced. At the same time, three curved multispectral imaging systems are introduced. Snapshot artificial compound eye multispectral imaging technology is also briefly analyzed and compared. The research results are helpful to comprehensively understand the research status of snapshot multispectral multi-aperture imaging technology based on artificial compound eyes and to lay the foundation for improving its comprehensive performance even further.
22

Vuletić, Jelena, Marsela Car, and Matko Orsag. "Close-range multispectral imaging with Multispectral-Depth (MS-D) system." Biosystems Engineering 231 (July 2023): 178–94. http://dx.doi.org/10.1016/j.biosystemseng.2023.06.002.

23

Zdravcheva, Neli. "INFORMATION EXTRACTION FROM MULTISPECTRAL SATELLITE IMAGES." Journal Scientific and Applied Research 24, no. 1 (November 23, 2023): 25–31. http://dx.doi.org/10.46687/jsar.v24i1.364.

Abstract:
The article analyzes various methods and approaches of modern remote sensing that can be used in the processing of multispectral satellite images in order to effectively extract visual information about territories for which preliminary data are not available. Attention is paid to the creation of new derivative images (synthesized and indexed) and to performing pixel-oriented unsupervised computer classification. A series of experiments has been carried out that clearly reveals the advantages and convenience of remote retrieval of information from multispectral satellite images for a territory for which reference objects and other data acquired in situ are not available.
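
By way of example, the two ingredients mentioned in the abstract, an indexed derivative image and a pixel-oriented unsupervised classification, might be prototyped as follows. NDVI and k-means are common generic choices used here for illustration; the article does not prescribe these particular algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

def ndvi(red, nir, eps=1e-6):
    """Normalised Difference Vegetation Index, a typical 'indexed' derivative image."""
    return (nir - red) / (nir + red + eps)

def unsupervised_classes(bands, n_classes=6, seed=0):
    """Pixel-oriented unsupervised classification of a (H, W, B) band stack via k-means."""
    h, w, b = bands.shape
    model = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    return model.fit_predict(bands.reshape(-1, b)).reshape(h, w)
```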
24

Wang, Jianwei, and Yan Zhao. "Lensless Multispectral Camera Based on a Coded Aperture Array." Sensors 21, no. 22 (November 22, 2021): 7757. http://dx.doi.org/10.3390/s21227757.

Abstract:
Multispectral imaging can be applied to water quality monitoring, medical diagnosis, and other applications, but the principle of multispectral imaging is different from that of hyperspectral imaging. Multispectral imaging is generally achieved through filters, so multiple photos are required to obtain spectral information. Using multiple detectors to take pictures at the same time increases the complexity and cost of the system. This paper proposes a simple multispectral camera based on lensless imaging, which does not require multiple lenses. The core of the system is the multispectral coding aperture. The coding aperture is divided into different regions, and each region transmits light of one wavelength, such that the spectral information of the target can be coded. By solving a sparsity-constrained inverse problem, the multispectral information of the target is recovered. Herein, we analyzed the characteristics of this multispectral camera and developed a proof-of-principle prototype to obtain experimental results.
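
The reconstruction step described above amounts to a sparsity-regularised linear inverse problem. A minimal iterative soft-thresholding (ISTA) sketch is shown below, assuming a generic sensing matrix A that models the coded aperture; the paper's actual forward model, regulariser, and solver may differ.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=200):
    """Solve min_x 0.5*||A @ x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```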
25

Kim, Taeheon, Youngjoon Yu, and Yong Man Ro. "Multispectral Invisible Coating: Laminated Visible-Thermal Physical Attack against Multispectral Object Detectors Using Transparent Low-E Films." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 1151–59. http://dx.doi.org/10.1609/aaai.v37i1.25197.

Abstract:
Multispectral object detection plays a vital role in safety-critical vision systems that require around-the-clock operation and encounter dynamic real-world situations (e.g., self-driving cars and autonomous surveillance systems). Despite its crucial competence in safety-related applications, its security against physical attacks is severely understudied. We investigate the vulnerability of multispectral detectors against physical attacks by proposing a new physical method: Multispectral Invisible Coating. Utilizing transparent Low-E films, we realize a laminated visible-thermal physical attack by attaching Low-E films over a visible attack printing. Moreover, we apply our physical method to manufacture a Multispectral Invisible Suit that hides persons from the multiple view angles of multispectral detectors. To simulate our attack under various surveillance scenes, we constructed a large-scale multispectral pedestrian dataset which we will release publicly. Extensive experiments show that our proposed method effectively attacks the state-of-the-art multispectral detector in both the digital space and the physical world.
26

Zhang, Yanchao, Wen Yang, Ying Sun, Christine Chang, Jiya Yu, and Wenbo Zhang. "Fusion of Multispectral Aerial Imagery and Vegetation Indices for Machine Learning-Based Ground Classification." Remote Sensing 13, no. 8 (April 7, 2021): 1411. http://dx.doi.org/10.3390/rs13081411.

Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging and promising platforms for carrying different types of cameras for remote sensing. The application of multispectral vegetation indices for ground cover classification has been widely adopted and has proved its reliability. However, the fusion of spectral bands and vegetation indices for machine learning-based land surface investigation has hardly been studied. In this paper, we studied the fusion of spectral band information from UAV multispectral images and derived vegetation indices for almond plantation classification using several machine learning methods. We acquired multispectral images over an almond plantation using a UAV. First, a multispectral orthoimage was generated from the acquired multispectral images using SfM (Structure from Motion) photogrammetry methods. Eleven types of vegetation indexes were proposed based on the multispectral orthoimage. Then, 593 data points that contained multispectral bands and vegetation indexes were randomly collected and prepared for this study. After comparing six machine learning algorithms (Support Vector Machine, K-Nearest Neighbor, Linear Discriminant Analysis, Decision Tree, Random Forest, and Gradient Boosting), we selected three (SVM, KNN, and LDA) to study the fusion of multispectral band information and derived vegetation indexes for classification. As the number of vegetation indexes increased, the classification accuracy of all three selected machine learning methods gradually increased, then dropped. Our results revealed that: (1) spectral information from multispectral images can be used for machine learning-based ground classification, and among all methods, SVM had the best performance; (2) the combination of multispectral bands and vegetation indexes can improve the classification accuracy compared to spectral bands alone for all three selected methods; (3) among all VIs, NDEGE, NDVIG, and NDVGE had consistent performance in improving classification accuracies, and others may reduce the accuracy. Machine learning methods (SVM, KNN, and LDA) can be used for classifying almond plantations using multispectral orthoimages, and fusion of multispectral bands with vegetation indexes can improve machine learning-based classification accuracy if the vegetation indexes are properly selected.
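
A compact way to reproduce the band-plus-index fusion and the three retained classifiers with scikit-learn is sketched below. The feature arrays, cross-validation setup, and hyper-parameters are illustrative assumptions rather than the study's exact protocol.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_fused_classifiers(band_features, vi_features, labels):
    """Fuse per-point spectral bands with derived vegetation indices and score
    SVM, KNN, and LDA by cross-validated accuracy."""
    X = np.hstack([band_features, vi_features])   # simple feature-level fusion
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
        "LDA": make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()),
    }
    return {name: cross_val_score(m, X, labels, cv=5).mean() for name, m in models.items()}
```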
27

Chakravortty, S., and P. Subramaniam. "Fusion of Hyperspectral and Multispectral Image Data for Enhancement of Spectral and Spatial Resolution." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 1099–103. http://dx.doi.org/10.5194/isprsarchives-xl-8-1099-2014.

Abstract:
Hyperspectral image enhancement has been a concern for the remote sensing society for detailed end member detection. Hyperspectral remote sensor collects images in hundreds of narrow, continuous spectral channels, whereas multispectral remote sensor collects images in relatively broader wavelength bands. However, the spatial resolution of the hyperspectral sensor image is comparatively lower than that of the multispectral. As a result, spectral signatures from different end members originate within a pixel, known as mixed pixels. This paper presents an approach for obtaining an image which has the spatial resolution of the multispectral image and spectral resolution of the hyperspectral image, by fusion of hyperspectral and multispectral image. The proposed methodology also addresses the band remapping problem, which arises due to different regions of spectral coverage by multispectral and hyperspectral images. Therefore we apply algorithms to restore the spatial information of the hyperspectral image by fusing hyperspectral bands with only those bands which come under each multispectral band range. The proposed methodology is applied over Henry Island, of the Sunderban eco-geographic province. The data is collected by the Hyperion hyperspectral sensor and LISS IV multispectral sensor.
28

Zou, Xiaoliang, Guihua Zhao, Jonathan Li, Yuanxi Yang, and Yong Fang. "3D LAND COVER CLASSIFICATION BASED ON MULTISPECTRAL LIDAR POINT CLOUDS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 6, 2016): 741–47. http://dx.doi.org/10.5194/isprs-archives-xli-b1-741-2016.

Abstract:
Multispectral Lidar systems can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are processed together with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data from three channels: 532 nm visible (green), 1064 nm near infrared (NIR), and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral Lidar point clouds for 3D land cover classification.
29

Matikainen, Leena, Juha Hyyppä, and Paula Litkey. "MULTISPECTRAL AIRBORNE LASER SCANNING FOR AUTOMATED MAP UPDATING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 323–30. http://dx.doi.org/10.5194/isprs-archives-xli-b3-323-2016.

Abstract:
During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, also when considering ground surface objects and classes, such as roads. An out-of-bag estimate for classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other. For buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages when the development of automated classification and change detection procedures is considered.
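
The random forest step with its out-of-bag error estimate maps directly onto scikit-learn, as sketched below. The sketch assumes per-object feature vectors and labels have already been derived from the multispectral ALS channels, which is where most of the work described above actually lies.

```python
from sklearn.ensemble import RandomForestClassifier

def land_cover_random_forest(features, labels, n_trees=200, seed=0):
    """Fit a random forest land-cover classifier and report its out-of-bag error."""
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=seed).fit(features, labels)
    return rf, 1.0 - rf.oob_score_   # oob_score_ is out-of-bag accuracy
```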
30

Tisserand, Stéphane. "VIS-NIR hyperspectral cameras." Photoniques, no. 110 (October 2021): 58–64. http://dx.doi.org/10.1051/photon/202111058.

Abstract:
Hyperspectral and multispectral imaging can record a single scene across a range of spectral bands. The resulting three-dimensional dataset is called a "hypercube". A spectrum is available for each point of the image. This makes it possible to analyse, quantify or differentiate the elements and materials constituting the scene. This article presents the existing technologies on the market and their main characteristics in the VIS/NIR spectral domain (400-1000 nm). It then focuses on a specific multispectral technology called snapshot multispectral imaging, combining CMOS sensors and pixelated multispectral filters (filtering at the pixel level).
31

Rossi, Lorenzo, Irene Mammi, and Filippo Pelliccia. "UAV-Derived Multispectral Bathymetry." Remote Sensing 12, no. 23 (November 27, 2020): 3897. http://dx.doi.org/10.3390/rs12233897.

Abstract:
Bathymetry is considered an important component in marine applications as several coastal erosion monitoring and engineering projects are carried out in this field. It is traditionally acquired via shipboard echo sounding, but nowadays, multispectral satellite imagery is also commonly applied using different remote sensing-based algorithms. Satellite-Derived Bathymetry (SDB) relates the surface reflectance of shallow coastal waters to the depth of the water column. The present study shows the results of the application of Stumpf and Lyzenga algorithms to derive the bathymetry for a small area using an Unmanned Aerial Vehicle (UAV), also known as a drone, equipped with a multispectral camera acquiring images in the same WorldView-2 satellite sensor spectral bands. A hydrographic Multibeam Echosounder survey was performed in the same period in order to validate the method’s results and accuracy. The study area was approximately 0.5 km2 and located in Tuscany (Italy). Because of the high percentage of water in the images, a new methodology was also implemented for producing a georeferenced orthophoto mosaic. UAV multispectral images were processed to retrieve bathymetric data for testing different band combinations and evaluating the accuracy as a function of the density and quantity of sea bottom control points. Our results indicate that UAV-Derived Bathymetry (UDB) permits an accuracy of about 20 cm to be obtained in bathymetric mapping in shallow waters, minimizing operative expenses and giving the possibility to program a coastal monitoring surveying activity. The full sea bottom coverage obtained using this methodology permits detailed Digital Elevation Models (DEMs) comparable to a Multibeam Echosounder survey, and can also be applied in very shallow waters, where the traditional hydrographic approach requires hard fieldwork and presents operational limits.
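
Of the two algorithms named above, the Stumpf band-ratio model is the simpler and can be sketched as follows. The constant n, the band choice, and the linear calibration against control-point depths are generic defaults, not the study's tuned values.

```python
import numpy as np

def stumpf_ratio(blue, green, n=1000.0):
    """Stumpf log-ratio predictor: depth is assumed to vary linearly with
    ln(n * R_blue) / ln(n * R_green); n keeps both logarithms positive."""
    return np.log(n * blue) / np.log(n * green)

def calibrate_depth(ratio_image, ratio_at_controls, depth_at_controls):
    """Fit the linear coefficients (m1, m0) on sea-bottom control points and
    map the whole ratio image to depth."""
    m1, m0 = np.polyfit(ratio_at_controls, depth_at_controls, 1)
    return m1 * ratio_image + m0
```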
32

Wójcik, Waldemar, Vladimir Firago, Andrzej Smolarz, Indira Shedreyeva, and Bakhyt Yeraliyeva. "Multispectral High Temperature Thermography." Sensors 22, no. 3 (January 19, 2022): 742. http://dx.doi.org/10.3390/s22030742.

Abstract:
The paper considers the issues of creating high-temperature digital thermographs based on RGB photodetector arrays. It has been shown that increasing the reliability of temperature measurement of bodies with unknown spectral coefficient of thermal radiation can be ensured by optimal selection of the used spectral range and registration of the observed thermal radiation fields in three spectral ranges. The registration of thermal radiation in four or more spectral ranges was found to be inefficient due to the increasing error in temperature determination. This paper presents a method for forming three overlapping spectral regions in the NIR spectral range, which is based on the use of an external spectral filter and a combination of the spectral characteristics of an RGB photodetector array. It is shown that it is necessary to ensure the stability of the solution of the system of three nonlinear equations with respect to the influence of noise. For this purpose, the use of a priori information about the slope factor of the spectral dependence of the thermal radiation coefficient in the selected spectral range for the controlled bodies is proposed. The theoretical results are confirmed by examples of their application in a thermograph based on an array of CMOS RGB photodetectors.
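
As a toy illustration of the inversion described above, the sketch below fits a temperature and an emissivity scale to three NIR band signals under Wien's approximation, with the emissivity slope supplied as a priori information. The radiometric model, units, and starting values are assumptions of this sketch, not the authors' calibration.

```python
import numpy as np
from scipy.optimize import least_squares

C2 = 1.4388e-2  # second radiation constant, m*K

def fit_temperature(signals, wavelengths_m, slope_k, lam_ref):
    """Least-squares fit of (temperature, emissivity scale) to band signals that are
    assumed proportional to spectral radiance, with a linear emissivity model
    eps(lam) = eps0 * (1 + slope_k * (lam - lam_ref))."""
    def residuals(params):
        t, eps0 = params
        eps = eps0 * (1.0 + slope_k * (wavelengths_m - lam_ref))
        pred = eps * wavelengths_m ** -5 * np.exp(-C2 / (wavelengths_m * t))
        return np.log(pred) - np.log(signals)       # compare in log space
    sol = least_squares(residuals, x0=[1500.0, 0.5],
                        bounds=([300.0, 1e-3], [4000.0, 1.0]))
    return sol.x  # (temperature in K, emissivity scale)
```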
33

Lebedev, O. A., V. E. Sabinin, and S. V. Solk. "A large multispectral objective." Journal of Optical Technology 78, no. 11 (November 30, 2011): 709. http://dx.doi.org/10.1364/jot.78.000709.

34

Pal, Himadri, and Mark Neifeld. "Multispectral principal component imaging." Optics Express 11, no. 18 (September 8, 2003): 2118. http://dx.doi.org/10.1364/oe.11.002118.

35

Norris, David J. "Multispectral quantum-dot photodetectors." Nature Photonics 13, no. 4 (March 22, 2019): 230–32. http://dx.doi.org/10.1038/s41566-019-0401-y.

36

Emori, Yasufumi. "Multispectral linear array system." Journal of the Japan Society of Photogrammetry and Remote Sensing 26, no. 2 (1987): 14–24. http://dx.doi.org/10.4287/jsprs.26.2_14.

37

Chen, Zhenyue, Xia Wang, and Rongguang Liang. "RGB-NIR multispectral camera." Optics Express 22, no. 5 (February 24, 2014): 4985. http://dx.doi.org/10.1364/oe.22.004985.

38

Chandra, Sayan, Daniel Franklin, Jared Cozart, Alireza Safaei, and Debashis Chanda. "Adaptive Multispectral Infrared Camouflage." ACS Photonics 5, no. 11 (September 12, 2018): 4513–19. http://dx.doi.org/10.1021/acsphotonics.8b00972.

39

Bist, K. S., S. C. Jain, and Vinod Kumar. "CCD Based Multispectral Scanner." IETE Technical Review 3, no. 5 (May 1986): 220–23. http://dx.doi.org/10.1080/02564602.1986.11437953.

40

Ashe, Philip R. "Miniature snapshot multispectral imager." Optical Engineering 50, no. 3 (March 1, 2011): 033203. http://dx.doi.org/10.1117/1.3552665.

41

Drew, Mark S., and Graham D. Finlayson. "Multispectral processing without spectra." Journal of the Optical Society of America A 20, no. 7 (July 1, 2003): 1181. http://dx.doi.org/10.1364/josaa.20.001181.

42

McCarthy, Annemarie, Killian Barton, and Liam Lewis. "Low-Cost Multispectral Imager." Journal of Chemical Education 97, no. 10 (August 31, 2020): 3892–98. http://dx.doi.org/10.1021/acs.jchemed.0c00407.

43

Pelagotti, Anna, Andrea Mastio, Alessia Rosa, and Alessandro Piva. "Multispectral imaging of paintings." IEEE Signal Processing Magazine 25, no. 4 (July 2008): 27–36. http://dx.doi.org/10.1109/msp.2008.923095.

44

LeGendre, Chloe, Xueming Yu, Dai Liu, Jay Busch, Andrew Jones, Sumanta Pattanaik, and Paul Debevec. "Practical multispectral lighting reproduction." ACM Transactions on Graphics 35, no. 4 (July 11, 2016): 1–11. http://dx.doi.org/10.1145/2897824.2925934.

45

Bennett, Eric P., John L. Mason, and Leonard McMillan. "Multispectral Bilateral Video Fusion." IEEE Transactions on Image Processing 16, no. 5 (May 2007): 1185–94. http://dx.doi.org/10.1109/tip.2007.894236.

46

Aguilera, Cristhian, Fernando Barrera, Felipe Lumbreras, Angel D. Sappa, and Ricardo Toledo. "Multispectral Image Feature Points." Sensors 12, no. 9 (September 17, 2012): 12661–72. http://dx.doi.org/10.3390/s120912661.

47

Srinuanjan, Keerayoot, Masaki Obara, and Kyu Yoshimori. "Multispectral hyperbolic incoherent holography." Optical Review 25, no. 1 (December 21, 2017): 65–77. http://dx.doi.org/10.1007/s10043-017-0397-9.

48

Cohen, Sarah, Alex M. Valm, and Jennifer Lippincott-Schwartz. "Multispectral Live-Cell Imaging." Current Protocols in Cell Biology 79, no. 1 (May 14, 2018): e46. http://dx.doi.org/10.1002/cpcb.46.
