Journal articles on the topic 'Sentinel-2 multispectral imagery'

Consult the top 50 journal articles for your research on the topic 'Sentinel-2 multispectral imagery.'

1

Krauklit, G., and K. Aghayeva. "METHANE DETECTION BASED ON SENTINEL-2 MULTISPECTRAL IMAGERY." Sciences of Europe, no. 110 (February 7, 2023): 77–81. https://doi.org/10.5281/zenodo.7618453.

Abstract:
Methane is a powerful greenhouse gas that lacks both odour and colour and has a significant impact on climate change. Methane's contribution to global warming is about 25% of all warming observed since pre-industrial times. Anthropogenic methane emissions come from many different sources, mostly related to agricultural activities, coal mining, oil and gas extraction, and waste treatment. Studies in this area have shown that some of these sources emit significant amounts of methane due to equipment failures or abnormal operating conditions. This article describes how the use of the Sentinel-2 MultiSpectral Instrument (MSI) can help effectively mitigate climate change by enabling rapid detection and frequent monitoring of strong methane sources for subsequent remediation with fine resolution (20 m) and rapid imaging frequency (2-5 days).
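
The abstract does not spell out its retrieval algorithm, so the following is only a rough sketch of one common approach from the methane-mapping literature (not necessarily the authors' method): methane absorbs more strongly in Sentinel-2 band B12 than in B11, so a plume depresses the B12/B11 ratio relative to the local background. All array names, values, and thresholds below are illustrative assumptions.

```python
import numpy as np

def swir_ratio_anomaly(b12: np.ndarray, b11: np.ndarray) -> np.ndarray:
    """Single-pass SWIR ratio: methane absorbs more strongly in B12 (~2200 nm)
    than in B11 (~1600 nm), so a plume lowers B12/B11 relative to background."""
    ratio = b12 / np.clip(b11, 1e-6, None)
    return ratio / np.median(ratio) - 1.0          # background sits near zero

def plume_candidates(anomaly: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Flag pixels whose ratio anomaly lies k robust standard deviations
    below the background (median absolute deviation scaled to sigma)."""
    mad = np.median(np.abs(anomaly - np.median(anomaly)))
    return anomaly < np.median(anomaly) - k * 1.4826 * mad

# Hypothetical 20 m SWIR reflectance arrays (e.g. resampled Sentinel-2 B11/B12).
rng = np.random.default_rng(0)
b11 = rng.uniform(0.20, 0.30, (100, 100))
b12 = b11 * rng.uniform(0.78, 0.82, b11.shape)
b12[40:50, 40:50] *= 0.9                           # synthetic absorption stand-in for a plume
mask = plume_candidates(swir_ratio_anomaly(b12, b11))
print(mask.sum(), "candidate plume pixels")
```
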
2

Krauß, T. "EXTRACTION OF CLOUD HEIGHTS FROM SENTINEL-2 MULTISPECTRAL IMAGES." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2021 (June 17, 2021): 17–23. http://dx.doi.org/10.5194/isprs-annals-v-1-2021-17-2021.

Abstract:
Investigation of the focal plane assembly of the Sentinel-2 satellites shows slight delays of about 0.5 to 1 second in the acquisition time of different bands on different CCD lines. This effect has already been exploited for detecting moving objects in very high resolution imagery such as WorldView-2 or -3, and also for Sentinel-2 imagery. In our study we use the four 10 m bands 2, 3, 4 and 8 (blue, green, red and near infrared) of Sentinel-2. In the Level-1C processing each spectral band is orthorectified separately on the same digital elevation model. Consequently, moving objects on the ground show a shift between the spectral bands, and objects above the ground also show a slight shift between the spectral bands that depends on the height of the object above ground. In this work we exploit this second effect. Analysis of cloudy Sentinel-2 scenes shows small shifts of only one to two pixels depending on the height of the clouds above ground. A new method, based on algorithms for deriving dense digital elevation models from stereo imagery, was therefore developed to derive cloud heights in Sentinel-2 images from the parallax between the 10 m bands. After a detailed description of the developed method, it is applied to different cloudy Sentinel-2 images and the results are cross-checked using the shadows of the clouds together with the position of the sun at acquisition time.
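
As a way to make the band-to-band parallax idea concrete, here is a minimal sketch (my own illustration, not the authors' dense-matching method) that estimates the integer pixel shift between two co-registered 10 m band patches with FFT phase correlation; converting that shift into a cloud height would additionally require the sensor viewing geometry, which is omitted here.

```python
import numpy as np

def band_shift(patch_a: np.ndarray, patch_b: np.ndarray) -> tuple[int, int]:
    """Integer pixel shift between two band patches via FFT phase correlation.
    Clouds, sitting well above the DEM used for Level-1C orthorectification,
    appear displaced between bands acquired ~0.5-1 s apart."""
    fa, fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch size back to negative values.
    if dy > patch_a.shape[0] // 2:
        dy -= patch_a.shape[0]
    if dx > patch_a.shape[1] // 2:
        dx -= patch_a.shape[1]
    return int(dy), int(dx)

# Synthetic demo: a "cloud" texture displaced by two rows between two bands.
rng = np.random.default_rng(1)
band_blue = rng.random((128, 128))
band_nir = np.roll(band_blue, shift=2, axis=0)
print(band_shift(band_nir, band_blue))   # expected (2, 0)
```
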
3

Rahmadana, Aries Dwi Wahyu. "Pemanfaatan Foto Udara Multispektral Untuk Sidik Cepat Kerapatan Tutupan Vegetasi Di Wilayah Perkotaan." Geomedia Majalah Ilmiah dan Informasi Kegeografian 21, no. 1 (2023): 1–9. http://dx.doi.org/10.21831/gm.v21i1.35907.

Abstract:
Current technology enables NDVI analysis of multispectral aerial photographs, providing fast and detailed information on actual vegetation cover. However, no study has established the best threshold value for mapping vegetation cover in urban areas. This study aims 1) to compare the area of vegetation cover at semi-detailed and detailed scales, and 2) to determine the best NDVI threshold for mapping vegetation cover in urban areas. The study uses multispectral aerial photographs and Sentinel-2 satellite imagery (used as the baseline). The visible, red, and NIR bands of both datasets were processed and used for NDVI analysis. The results show that the best threshold is an NDVI of 0.2. Identification of vegetation cover with Sentinel-2 imagery yielded an area of 6.37 hectares (28.2%), while the aerial photographs yielded 11.4 hectares (50.5%). The difference in area between the two datasets is due to the difference in spatial resolution between the aerial photographs (4 cm) and the Sentinel-2 imagery (10 m).
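
For orientation, the core computation the abstract relies on is just the NDVI with a fixed threshold. The sketch below (illustrative only; the band arrays and the 10 m pixel size are assumptions standing in for the study's data) counts vegetated area at a chosen threshold such as the 0.2 reported here.

```python
import numpy as np

def vegetation_area(red: np.ndarray, nir: np.ndarray,
                    threshold: float = 0.2,
                    pixel_area_m2: float = 100.0) -> tuple[float, float]:
    """Return (vegetated area in hectares, vegetated fraction) for one scene.

    NDVI = (NIR - Red) / (NIR + Red); pixels above the threshold count as
    vegetation. pixel_area_m2 is 100 for 10 m Sentinel-2 pixels and would be
    0.0016 for 4 cm aerial photographs like those in the abstract.
    """
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    veg = ndvi > threshold
    return veg.sum() * pixel_area_m2 / 10_000.0, float(veg.mean())

# Hypothetical reflectance arrays standing in for Sentinel-2 B04 (red) and B08 (NIR).
rng = np.random.default_rng(2)
red = rng.uniform(0.02, 0.20, (500, 500))
nir = rng.uniform(0.10, 0.50, (500, 500))
print(vegetation_area(red, nir, threshold=0.2))
```
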
4

Taghadosi, Mohammad Mahdi, Mahdi Hasanlou, and Kamran Eftekhari. "Retrieval of soil salinity from Sentinel-2 multispectral imagery." European Journal of Remote Sensing 52, no. 1 (2019): 138–54. http://dx.doi.org/10.1080/22797254.2019.1571870.

5

Hartmann, David, Mathieu Gravey, Timothy David Price, Wiebe Nijland, and Steven Michael de Jong. "Surveying Nearshore Bathymetry Using Multispectral and Hyperspectral Satellite Imagery and Machine Learning." Remote Sensing 17, no. 2 (2025): 291. https://doi.org/10.3390/rs17020291.

Abstract:
Nearshore bathymetric data are essential for assessing coastal hazards, studying benthic habitats and for coastal engineering. Traditional bathymetry mapping techniques of ship-sounding and airborne LiDAR are laborious, expensive and not always efficient. Multispectral and hyperspectral remote sensing, in combination with machine learning techniques, are gaining interest. Here, the nearshore bathymetry of southwest Puerto Rico is estimated with multispectral Sentinel-2 and hyperspectral PRISMA imagery using conventional spectral band ratio models and more advanced XGBoost models and convolutional neural networks. The U-Net, trained on 49 Sentinel-2 images, and the 2D-3D CNN, trained on PRISMA imagery, had a Mean Absolute Error (MAE) of approximately 1 m for depths up to 20 m and were superior to band ratio models by ~40%. Problems with underprediction remain for turbid waters. Sentinel-2 showed higher performance than PRISMA up to 20 m (~18% lower MAE), attributed to training with a larger number of images and employing an ensemble prediction, while PRISMA outperformed Sentinel-2 for depths between 25 m and 30 m (~19% lower MAE). Sentinel-2 imagery is recommended over PRISMA imagery for estimating shallow bathymetry given its similar performance, much higher image availability and easier handling. Future studies are recommended to train neural networks with images from various regions to increase generalization and method portability. Models are preferably trained by area-segregated splits to ensure independence between the training and testing set. Using a random train test split for bathymetry is not recommended due to spatial autocorrelation of sea depth, resulting in data leakage. This study demonstrates the high potential of machine learning models for assessing the bathymetry of optically shallow waters using optical satellite imagery.
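
As a point of reference for the "conventional spectral band ratio models" the deep networks are compared against, the sketch below implements the widely used Stumpf log-ratio model; the reflectance values and calibration depths are made up for illustration, not taken from the study.

```python
import numpy as np

def stumpf_depth(blue: np.ndarray, green: np.ndarray,
                 m1: float, m0: float, n: float = 1000.0) -> np.ndarray:
    """Stumpf log-ratio model: depth = m1 * ln(n*R_blue) / ln(n*R_green) - m0.
    m1 and m0 are tuned against known depths (sonar, lidar or chart soundings);
    n is a fixed constant that keeps both logarithms positive."""
    return m1 * (np.log(n * blue) / np.log(n * green)) - m0

# Hypothetical calibration points: blue/green water-leaving reflectances and known depths.
blue = np.array([0.050, 0.040, 0.030, 0.020])
green = np.array([0.060, 0.045, 0.030, 0.018])
known_depth = np.array([2.0, 5.0, 10.0, 18.0])

ratio = np.log(1000.0 * blue) / np.log(1000.0 * green)
m1, intercept = np.polyfit(ratio, known_depth, 1)   # depth ~ m1 * ratio + intercept, so m0 = -intercept
print(stumpf_depth(blue, green, m1, -intercept))    # approximately reproduces the calibration depths
```
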
6

Li, Minhui, Redmond R. Shamshiri, Cornelia Weltzien, and Michael Schirrmann. "Crop Monitoring Using Sentinel-2 and UAV Multispectral Imagery: A Comparison Case Study in Northeastern Germany." Remote Sensing 14, no. 17 (2022): 4426. http://dx.doi.org/10.3390/rs14174426.

Abstract:
Monitoring within-field crop variability at fine spatial and temporal resolution can assist farmers in making reliable decisions during their agricultural management; however, it traditionally involves a labor-intensive and time-consuming pointwise manual process. To the best of our knowledge, few studies conducted a comparison of Sentinel-2 with UAV data for crop monitoring in the context of precision agriculture. Therefore, prospects of crop monitoring for characterizing biophysical plant parameters and leaf nitrogen of wheat and barley crops were evaluated from a more practical viewpoint closer to agricultural routines. Multispectral UAV and Sentinel-2 imagery was collected over three dates in the season and compared with reference data collected at 20 sample points for plant leaf nitrogen (N), maximum plant height, mean plant height, leaf area index (LAI), and fresh biomass. Higher correlations of UAV data to the agronomic parameters were found on average than with Sentinel-2 data with a percentage increase of 6.3% for wheat and 22.2% for barley. In this regard, VIs calculated from spectral bands in the visible part performed worse for Sentinel-2 than for the UAV data. In addition, large-scale patterns, formed by the influence of an old riverbed on plant growth, were recognizable even in the Sentinel-2 imagery despite its much lower spatial resolution. Interestingly, also smaller features, such as the tramlines from controlled traffic farming (CTF), had an influence on the Sentinel-2 data and showed a systematic pattern that affected even semivariogram calculation. In conclusion, Sentinel-2 imagery is able to capture the same large-scale pattern as can be derived from the higher detailed UAV imagery; however, it is at the same time influenced by management-driven features such as tramlines, which cannot be accurately georeferenced. In consequence, agronomic parameters were better correlated with UAV than with Sentinel-2 data. Crop growers as well as data providers from remote sensing services may take advantage of this knowledge and we recommend the use of UAV data as it gives additional information about management-driven features. For future perspective, we would advise fusing UAV with Sentinel-2 imagery taken early in the season as it can integrate the effect of agricultural management in the subsequent absence of high spatial resolution data to help improve crop monitoring for the farmer and to reduce costs.
7

Raharja, Bayu, Agung Setianto, and Anastasia Dewi Titisari. "Comparison of Different Multispectral Images to Map Hydrothermal Alteration Zones in Kokap, Kulon Progo." Journal of Applied Geology 6, no. 2 (2021): 86. http://dx.doi.org/10.22146/jag.60699.

Abstract:
Besides saving time and reducing cost, using remote sensing data for hydrothermal alteration mapping can increase accuracy. In this study, the results of multispectral remote sensing techniques were compared for mapping hydrothermal alteration in Kokap, Kulon Progo. Three multispectral images, ASTER, Landsat 8, and Sentinel-2, were compared in order to find the highest overall accuracy using principal component analysis (PCA) and directed principal component analysis (DPC). Several band-subset combinations were used as PCA and DPC inputs to target the key alteration minerals. Multispectral classification with the maximum likelihood algorithm was performed to map the alteration types based on training and testing data, followed by an accuracy evaluation. Two alteration zones were successfully mapped: an argillic zone and a propylitic zone. The results of these image classification techniques were compared with known alteration zones from a previous study. The DPC combination of the band ratio images 5:2 and 6:7 of Landsat 8 imagery yielded a classification accuracy of 56.4%, which was 5.05% and 10.13% higher than those of the ASTER and Sentinel-2 imagery, respectively. The use of a DEM together with the multispectral images increased the accuracy of hydrothermal alteration mapping in the study area.
8

Liao, Yao, Yun Liu, Juan Yang, et al. "A Comparative Study Between Gaofen-1 WFV and Sentinel MSI Imagery for Fire Severity Assessment in a Karst Region, China." Forests 16, no. 4 (2025): 597. https://doi.org/10.3390/f16040597.

Abstract:
Wild fires frequently influence fragile karst forest ecosystems in southwestern China. We evaluated the potential of Gaofen Wide Field of View (WFV) imagery for assessing the fire severity of karst forest fires. Comparison with Sentinel Multispectral Imager (MSI) imagery was conducted using 19 spectral indices. The highest correlation for Sentinel-2 MSI is 0.634, while for Gaofen-1 WFV it is 0.583. This is not a significant difference. The burned area index, differenced burned area index, and relative differenced modified soil adjusted vegetation index were the highest performing indices for the Gaofen-1 WFV, while the normalized burn ratio plus, differenced normalized differential vegetation index, and relative differenced normalized differential vegetation index were the best for the Sentinel MSI. The total accuracy evaluation of the fire severity assessment for Gaofen-1 WFV ranged from 40 to 44% and that for Sentinel MSI ranged from 40 to 48%. The difference in accuracy between the two satellites was less than 10%. The RMSE values for all six models were close to 0.6, ranging from 0.58 to 0.67. The fire severity maps derived from both imagery sources exhibited overall similar spatial patterns, but the Sentinel-2 MSI maps are obviously finer. These maps matched well with the unmanned aerial vehicle (UAV) images, particularly at high and unburned severity levels. The results of this study revealed that the performance of the Gaofen WFV imagery was close to that of Sentinel MSI imagery which makes it an effective data source for fire severity assessment in this region.
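
Several of the indices named in the abstract are differenced burn ratios. As a generic illustration (not the authors' exact index set or class thresholds), the sketch below computes dNBR from pre- and post-fire NIR/SWIR reflectances and bins it into coarse severity classes using commonly cited break points.

```python
import numpy as np

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced Normalized Burn Ratio: dNBR = NBR_pre - NBR_post,
    with NBR = (NIR - SWIR) / (NIR + SWIR)."""
    nbr_pre = (nir_pre - swir_pre) / np.clip(nir_pre + swir_pre, 1e-6, None)
    nbr_post = (nir_post - swir_post) / np.clip(nir_post + swir_post, 1e-6, None)
    return nbr_pre - nbr_post

def severity_class(d):
    """Map dNBR to coarse severity labels; the break points are illustrative
    USGS-style values, not those calibrated in the cited study."""
    bins = np.array([-np.inf, 0.10, 0.27, 0.44, 0.66])
    labels = np.array(["unburned", "low", "moderate-low", "moderate-high", "high"])
    return labels[np.digitize(d, bins) - 1]

# Hypothetical Sentinel-2 reflectances (B8A as NIR, B12 as SWIR) for three pixels.
d = dnbr(np.array([0.35, 0.30, 0.32]), np.array([0.15, 0.16, 0.14]),
         np.array([0.18, 0.28, 0.10]), np.array([0.25, 0.17, 0.30]))
print(np.round(d, 2), severity_class(d))
```
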
9

Марюшко, Максим В’ячеславович, Руслан Едуардович Пащенко, and Наталія Сергіївна Коблюк. "Monitoring of Agricultural Crops Using Sentinel-2 Space Images" [МОНІТОРИНГ СІЛЬСЬКОГОСПОДАРСЬКИХ КУЛЬТУР ІЗ ЗАСТОСУВАННЯМ КОСМІЧНИХ ЗНІМКІВ SENTINEL-2]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 1 (March 23, 2019): 99–108. http://dx.doi.org/10.32620/reks.2019.1.11.

Abstract:
The subject of the study is the growing need for spatial information for efficient agricultural production, driven by the increasing accessibility of Earth remote sensing data, which, owing to improvements in spatial and temporal resolution, can be used for land cover analysis and related tasks. The goal is to review the process of obtaining multispectral satellite imagery from Sentinel-2 and to consider the possibility of using it for monitoring crops throughout the entire vegetation period. The tasks are: to study the current needs of agricultural producers in analysing land cover occupied by agricultural crops; to analyse the European Space Agency programmes and the global land programme Copernicus, which uses spatial information from Sentinel-2 for the agricultural sector; to assess the characteristics of the Sentinel-2 constellation and its imaging equipment, as well as the remote sensing data processing results available to ground services from Internet services; and to use Sentinel-2 multispectral space imagery for monitoring crops throughout the entire vegetation period. The following results were obtained. After analysing the needs of agricultural producers and the European Space Agency programme, the feasibility of using multispectral space images taken by the MultiSpectral Instrument installed on the Sentinel-2 satellites was established. Free access to the space imagery database is provided through the Copernicus Open Access Hub Internet service. For the study area, the village of Vilkhovatka in Chutov district, Poltava region, images from different dates were obtained and the normalized difference vegetation index (NDVI) was calculated. Histogram analysis of the distribution of vegetation index values within a single field (maize grown for grain) reveals quantitative and qualitative changes in biomass, indicating a change in the vegetative phase. Conclusions. The approach described in this paper allows the state of crops to be monitored during the vegetation period using both a qualitative criterion (visual analysis) and a quantitative one (the NDVI index). A change in the values of the normalized difference vegetation index can reveal a change in the state of the biomass. However, calculating the NDVI requires data from the near-infrared and red channels, which complicates the acquisition of the original image. Therefore, to obtain quantitative criteria in future work, it is worth considering the use of fractal dimension, which would reduce the amount of input data required for the calculations.
10

Hu, Bin, Yongyang Xu, Xiao Huang, et al. "Improving Urban Land Cover Classification with Combined Use of Sentinel-2 and Sentinel-1 Imagery." ISPRS International Journal of Geo-Information 10, no. 8 (2021): 533. http://dx.doi.org/10.3390/ijgi10080533.

Abstract:
Accurate land cover mapping is important for urban planning and management. Remote sensing data have been widely applied for urban land cover mapping. However, obtaining land cover classification via optical remote sensing data alone is difficult due to spectral confusion. To reduce the confusion between dark impervious surfaces and water, the Sentinel-1A Synthetic Aperture Radar (SAR) data are synergistically combined with the Sentinel-2B Multispectral Instrument (MSI) data. The novel support vector machine with composite kernels (SVM-CK) approach, which can exploit spatial information, is proposed to process the combined Sentinel-2B MSI and Sentinel-1A SAR data. The classification based on the fusion of Sentinel-2B and Sentinel-1A data yields an overall accuracy (OA) of 92.12% with a kappa coefficient (KA) of 0.89, superior to the classification results obtained using Sentinel-2B MSI imagery and Sentinel-1A SAR imagery separately. The results indicate that adding Sentinel-1A SAR data to Sentinel-2B MSI data can improve the classification performance by reducing the confusion between built-up areas and water. This study shows that land cover classification can be improved by fusing Sentinel-2B and Sentinel-1A imagery.
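
A composite-kernel SVM of the kind named in the abstract weights a kernel on per-pixel spectral features against a kernel on spatial (neighbourhood) features. The sketch below is a generic scikit-learn illustration of that idea with made-up feature arrays, not the authors' implementation or parameter choices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def composite_kernel(spectral, spatial, mu=0.6, gamma_spec=1.0, gamma_spat=1.0):
    """Weighted sum of two RBF kernels, K = mu*K_spectral + (1-mu)*K_spatial,
    the core of the composite-kernel (SVM-CK) idea."""
    return (mu * rbf_kernel(spectral, gamma=gamma_spec)
            + (1.0 - mu) * rbf_kernel(spatial, gamma=gamma_spat))

# Hypothetical training pixels: 12 stacked Sentinel-2 + Sentinel-1 features per pixel,
# plus 12 spatial features (e.g. neighbourhood means of the same bands).
rng = np.random.default_rng(3)
X_spec = rng.random((200, 12))
X_spat = rng.random((200, 12))
y = rng.integers(0, 4, 200)                  # four land-cover classes

K_train = composite_kernel(X_spec, X_spat)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
# Predicting new pixels would require the same composite kernel evaluated
# between the new samples and these training samples.
```
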
11

Zhang, Jingzong, Shijie Cong, Gen Zhang, Yongjun Ma, Yi Zhang, and Jianping Huang. "Detecting Pest-Infested Forest Damage through Multispectral Satellite Imagery and Improved UNet++." Sensors 22, no. 19 (2022): 7440. http://dx.doi.org/10.3390/s22197440.

Abstract:
Plant pests are the primary biological threat to agricultural and forestry production as well as to forest ecosystems. Monitoring forest-pest damage via satellite images is crucial for the development of prevention and control strategies. Previous studies that used deep learning to monitor pest-infested damage in satellite imagery adopted RGB images, while multispectral imagery and vegetation indices were not used. Multispectral images and vegetation indices contain a wealth of useful information for assessing plant health, which can improve the precision of pest damage detection. The aim of this study is to further improve forest-pest infestation area segmentation by combining multispectral bands, vegetation indices, and RGB information in deep learning. We also propose a new image segmentation method based on UNet++ with an attention mechanism module for detecting forest damage induced by bark beetle and aspen leaf miner in Sentinel-2 images. ResNeSt101 is used as the feature extraction backbone, and the scSE attention module is introduced in the decoding phase to improve the segmentation results. We used Sentinel-2 imagery to produce a dataset based on forest health damage data gathered by the Ministry of Forests, Lands, Natural Resource Operations and Rural Development (FLNRORD) in British Columbia (BC), Canada, during aerial overview surveys (AOS) in 2020. The dataset contains the 11 original Sentinel-2 bands and 13 vegetation indices. The experimental results confirmed the significance of vegetation indices and multispectral data in enhancing the segmentation. The proposed method exhibits better segmentation quality and more accurate quantitative indices, with an overall accuracy of 85.11%, in comparison with state-of-the-art pest area segmentation methods.
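
The scSE decoder attention the abstract mentions is a published, generic module; the PyTorch sketch below is a re-implementation of that general idea for illustration only (channel counts and placement are assumptions, not the authors' network).

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel gate: global average pool -> bottleneck MLP -> sigmoid.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: 1x1 convolution collapses channels to a per-pixel weight.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.cse(x) + x * self.sse(x)

# A decoder feature map with an illustrative channel count (the encoder input
# would carry the 11 Sentinel-2 bands plus 13 vegetation indices = 24 channels).
features = torch.randn(2, 64, 32, 32)
print(SCSEBlock(64, reduction=8)(features).shape)   # torch.Size([2, 64, 32, 32])
```
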
12

Xun, Lan, Jiahua Zhang, Dan Cao, Shanshan Yang, and Fengmei Yao. "A novel cotton mapping index combining Sentinel-1 SAR and Sentinel-2 multispectral imagery." ISPRS Journal of Photogrammetry and Remote Sensing 181 (November 2021): 148–66. http://dx.doi.org/10.1016/j.isprsjprs.2021.08.021.

13

Bie, Wanjuan, Teng Fei, Xinyu Liu, Huizeng Liu, and Guofeng Wu. "Small water bodies mapped from Sentinel-2 MSI (MultiSpectral Imager) imagery with higher accuracy." International Journal of Remote Sensing 41, no. 20 (2020): 7912–30. http://dx.doi.org/10.1080/01431161.2020.1766150.

14

Mzid, Nada, Olfa Boussadia, Rossella Albrizio, Anna Maria Stellacci, Mohamed Braham, and Mladen Todorovic. "Salinity Properties Retrieval from Sentinel-2 Satellite Data and Machine Learning Algorithms." Agronomy 13, no. 3 (2023): 716. http://dx.doi.org/10.3390/agronomy13030716.

Abstract:
The accurate monitoring of soil salinization plays a key role in the ecological security and sustainable agricultural development of semiarid regions. The objective of this study was to achieve the best estimation of electrical conductivity variables from salt-affected soils in a south Mediterranean region using Sentinel-2 multispectral imagery. In order to realize this goal, a test was carried out using electrical conductivity (EC) data collected in central Tunisia. Soil electrical conductivity and leaf electrical conductivity were measured in an olive orchard over two growing seasons and under three irrigation treatments. Firstly, selected spectral salinity, chlorophyll, water, and vegetation indices were tested over the experimental area to estimate both soil and leaf EC using Sentinel-2 imagery on the Google Earth Engine platform. Subsequently, estimation models of soil and leaf EC were calibrated by employing machine learning (ML) techniques using 12 spectral bands of Sentinel-2 images. The prediction accuracy of the EC estimation was assessed by using k-fold cross-validation and computing statistical metrics. The results of the study revealed that machine learning algorithms, together with multispectral data, could advance the mapping and monitoring of soil and leaf electrical conductivity.
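
The abstract's core recipe, regressing measured EC on the 12 Sentinel-2 band reflectances with a machine learning model and scoring it by k-fold cross-validation, can be sketched generically as below; the data are synthetic and the random forest stands in for whichever ML algorithms the authors actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical dataset: 12 Sentinel-2 band reflectances sampled at each
# measurement point, with measured electrical conductivity (dS/m) as target.
rng = np.random.default_rng(42)
X = rng.random((60, 12))
y = 2.0 + 5.0 * X[:, 10] + rng.normal(0.0, 0.3, 60)   # synthetic EC driven by a SWIR-like band

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
print("R2 per fold:  ", np.round(cross_val_score(model, X, y, cv=cv, scoring="r2"), 2))
print("RMSE per fold:", np.round(-cross_val_score(model, X, y, cv=cv,
                                                  scoring="neg_root_mean_squared_error"), 2))
```
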
15

Xu, Rudong, Jin Liu, and Jianhui Xu. "Extraction of High-Precision Urban Impervious Surfaces from Sentinel-2 Multispectral Imagery via Modified Linear Spectral Mixture Analysis." Sensors 18, no. 9 (2018): 2873. http://dx.doi.org/10.3390/s18092873.

Abstract:
This study explores the performance of Sentinel-2A Multispectral Instrument (MSI) imagery for extracting urban impervious surface using a modified linear spectral mixture analysis (MLSMA) method. Sentinel-2A MSI provided 10 m red, green, blue, and near-infrared spectral bands, and 20 m shortwave infrared spectral bands, which were used to extract impervious surfaces. We aimed to extract urban impervious surfaces at a spatial resolution of 10 m in the main urban area of Guangzhou, China. In MLSMA, a built-up image was first extracted from the normalized difference built-up index (NDBI) using the Otsu’s method; the high-albedo, low-albedo, vegetation, and soil fractions were then estimated using conventional linear spectral mixture analysis (LSMA). The LSMA results were post-processed to extract high-precision impervious surface, vegetation, and soil fractions by integrating the built-up image and the normalized difference vegetation index (NDVI). The performance of MLSMA was evaluated using Landsat 8 Operational Land Imager (OLI) imagery. Experimental results revealed that MLSMA can extract the high-precision impervious surface fraction at 10 m with Sentinel-2A imagery. The 10 m impervious surface map of Sentinel-2A is capable of recovering more detail than the 30 m map of Landsat 8. In the Sentinel-2A impervious surface map, continuous roads and the boundaries of buildings in urban environments were clearly identified.
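
The first step of the described MLSMA chain, an NDBI built-up image thresholded with Otsu's method, is simple enough to sketch on its own; the synthetic arrays below are placeholders for resampled Sentinel-2 B11 and B08, and the later unmixing steps are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_otsu

def ndbi(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR)."""
    return (swir - nir) / np.clip(swir + nir, 1e-6, None)

def built_up_mask(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Binary built-up image from Otsu's threshold on the NDBI histogram."""
    index = ndbi(swir, nir)
    return index > threshold_otsu(index)

# Synthetic stand-in scene: a bright-SWIR "urban" block inside darker vegetation.
rng = np.random.default_rng(4)
nir = np.full((200, 200), 0.35) + rng.normal(0.0, 0.01, (200, 200))
swir = np.full((200, 200), 0.20) + rng.normal(0.0, 0.01, (200, 200))
swir[50:120, 60:140] = 0.45
print(built_up_mask(swir, nir).sum(), "built-up pixels")   # ~70 x 80 = 5600 expected
```
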
16

Pla, Magda, Gerard Bota, Andrea Duane, et al. "Calibrating Sentinel-2 Imagery with Multispectral UAV Derived Information to Quantify Damages in Mediterranean Rice Crops Caused by Western Swamphen (Porphyrio porphyrio)." Drones 3, no. 2 (2019): 45. http://dx.doi.org/10.3390/drones3020045.

Abstract:
Making agricultural production compatible with the conservation of biological diversity is a priority in areas in which human–wildlife conflicts arise. The threatened Western Swamphen (Porphyrio porphyrio) feeds on rice, inducing crop damage and leading to decreases in rice production. Due to the Swamphen protection status, economic compensation policies have been put in place to compensate farmers for these damages, thus requiring an accurate, quantitative, and cost-effective evaluation of rice crop losses over large territories. We used information captured from a UAV (Unmanned Aerial Vehicle) equipped with a multispectral Parrot SEQUOIA camera as ground-truth information to calibrate Sentinel-2 imagery to quantify damages in the region of Ebro Delta, western Mediterranean. UAV vegetation index NDVI (Normalized Difference Vegetation Index) allowed estimation of damages in rice crops at 10 cm pixel resolution by discriminating no-green vegetation pixels. Once co-registered with Sentinel grid, we predicted the UAV damage proportion at a 10 m resolution as a function of Sentinel-2 NDVI, and then we extrapolated the fitted model to the whole Sentinel-2 Ebro Delta image. Finally, the damage predicted with Sentinel-2 data was quantified at the agricultural plot level and validated with field information compiled on the ground by Rangers Service. We found that Sentinel2-NDVI data explained up to 57% of damage reported with UAV. The final validation with Rangers Service data pointed out some limitations in our procedure that leads the way to improving future development. Sentinel2 imagery calibrated with UAV information proved to be a viable and cost-efficient alternative to quantify damages in rice crops at large scales.
17

Shahabi, Hejar, Maryam Rahimzad, Sepideh Tavakkoli Piralilou, et al. "Unsupervised Deep Learning for Landslide Detection from Multispectral Sentinel-2 Imagery." Remote Sensing 13, no. 22 (2021): 4698. http://dx.doi.org/10.3390/rs13224698.

Abstract:
This paper proposes a new approach based on an unsupervised deep learning (DL) model for landslide detection. Recently, supervised DL models using convolutional neural networks (CNN) have been widely studied for landslide detection. Even though these models provide robust performance and reliable results, they depend highly on a large labeled dataset for their training step. As an alternative, in this paper, we developed an unsupervised learning model by employing a convolutional auto-encoder (CAE) to deal with the problem of limited labeled data for training. The CAE was used to learn and extract the abstract and high-level features without using training data. To assess the performance of the proposed approach, we used Sentinel-2 imagery and a digital elevation model (DEM) to map landslides in three different case studies in India, China, and Taiwan. Using minimum noise fraction (MNF) transformation, we reduced the multispectral dimension to three features containing more than 80% of scene information. Next, these features were stacked with slope data and NDVI as inputs to the CAE model. The Huber reconstruction loss was used to evaluate the inputs. We achieved reconstruction losses ranging from 0.10 to 0.147 for the MNF features, slope, and NDVI stack for all three study areas. The mini-batch K-means clustering method was used to cluster the features into two to five classes. To evaluate the impact of deep features on landslide detection, we first clustered a stack of MNF features, slope, and NDVI, then the same ones plus with the deep features. For all cases, clustering based on deep features provided the highest precision, recall, F1-score, and mean intersection over the union in landslide detection.
18

Heiselberg, Peder, and Henning Heiselberg. "Ship-Iceberg Discrimination in Sentinel-2 Multispectral Imagery by Supervised Classification." Remote Sensing 9, no. 11 (2017): 1156. http://dx.doi.org/10.3390/rs9111156.

19

Sekertekin, A., A. M. Marangoz, and H. Akcin. "PIXEL-BASED CLASSIFICATION ANALYSIS OF LAND USE LAND COVER USING SENTINEL-2 AND LANDSAT-8 DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W6 (November 13, 2017): 91–93. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w6-91-2017.

Abstract:
The aim of this study is to conduct accuracy analyses of Land Use Land Cover (LULC) classifications derived from Sentinel-2 and Landsat-8 data, and to reveal which dataset presents better accuracy. Zonguldak city and its near surroundings were selected as the study area for this case study. Sentinel-2 Multispectral Instrument (MSI) and Landsat-8 Operational Land Imager (OLI) data, acquired on 6 April 2016 and 3 April 2016 respectively, were used as satellite imagery in the study. The RGB and NIR bands of Sentinel-2 and Landsat-8 were used for classification and comparison. A pan-sharpening process was carried out for the Landsat-8 data before classification because the spatial resolution of Landsat-8 (30 m) is far coarser than that of the Sentinel-2 RGB and NIR bands (10 m). LULC images were generated using the pixel-based Maximum Likelihood (MLC) supervised classification method. As a result of the accuracy assessment, the kappa statistics for the Sentinel-2 and Landsat-8 data were 0.78 and 0.85 respectively. The obtained results showed that Sentinel-2 MSI produces more satisfying LULC images than Landsat-8 OLI data. However, in some areas of the Sea class, Landsat-8 presented better results than Sentinel-2.
20

Colaninno, N., A. Marambio, and J. Roca. "TESTING A COMBINED MULTISPECTRAL-MULTITEMPORAL APPROACH FOR GETTING CLOUDLESS IMAGERY FOR SENTINEL-2." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 293–300. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-293-2020.

Abstract:
Earth observation and land cover monitoring are among the major applications of satellite data. However, the use of primary satellite information is often limited by clouds, cloud shadows, and haze, which generally contaminate optical imagery. For purposes of hazard assessment, for instance of flooding, drought, or seismic events, the availability of uncontaminated optical data is required. Different approaches exist for masking and replacing cloud- and haze-related contamination; however, most common algorithms take advantage of thermal data. Here we test an algorithm suitable for optical imagery only. The approach combines a multispectral and multitemporal strategy to retrieve daytime cloudless and shadow-free imagery. While the approach has already been explored for Landsat data, namely Landsat 5 TM and Landsat 8 OLI, here we aim at testing the suitability of the method for the Sentinel-2 Multi-Spectral Instrument. A multitemporal stack of the same image scene is employed to retrieve a composite uncontaminated image over a period of a few months. In addition, in order to emphasize the effectiveness of optical imagery for monitoring post-disaster events, two temporal stages were processed, before and after a critical seismic event that occurred on Lombok Island, Indonesia, in summer 2018. The approach relies on a cloud and cloud shadow masking algorithm based on spectral features, and a data reconstruction phase based on automatic selection of the most suitable pixels from a multitemporal stack. The results have been tested against uncontaminated image samples of the same scene, and high accuracy is achieved.
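
The data-reconstruction idea, picking an uncontaminated observation per pixel from a multitemporal stack, can be illustrated with a very reduced sketch; a per-pixel median of the clear observations is used here as a stand-in for the paper's own "most suitable pixel" selection rule, and all arrays are synthetic.

```python
import numpy as np

def cloudless_composite(stack: np.ndarray, cloud_masks: np.ndarray) -> np.ndarray:
    """Per-pixel compositing over a multitemporal stack.

    stack:       (time, rows, cols, bands) reflectance array
    cloud_masks: (time, rows, cols) boolean array, True where cloud or shadow
    Returns a (rows, cols, bands) image from the per-pixel median of the clear
    observations; pixels never observed clear stay NaN.
    """
    masked = np.where(cloud_masks[..., None], np.nan, stack)
    return np.nanmedian(masked, axis=0)

# Hypothetical stack of four Sentinel-2 acquisitions of the same tile.
rng = np.random.default_rng(5)
stack = rng.random((4, 50, 50, 4))
masks = rng.random((4, 50, 50)) < 0.3       # ~30% contaminated pixels per date
composite = cloudless_composite(stack, masks)
print("gap fraction:", float(np.isnan(composite).mean()))
```
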
21

Njimi, Houssem, Nesrine Chehata, and Frédéric Revers. "Fusion of Dense Airborne LiDAR and Multispectral Sentinel-2 and Pleiades Satellite Imagery for Mapping Riparian Forest Species Biodiversity at Tree Level." Sensors 24, no. 6 (2024): 1753. http://dx.doi.org/10.3390/s24061753.

Abstract:
Multispectral and 3D LiDAR remote sensing data sources are valuable tools for characterizing the 3D vegetation structure and thus understanding the relationship between forest structure, biodiversity, and microclimate. This study focuses on mapping riparian forest species in the canopy strata using a fusion of Airborne LiDAR data and multispectral multi-source and multi-resolution satellite imagery: Sentinel-2 and Pleiades at tree level. The idea is to assess the contribution of each data source in the tree species classification at the considered level. The data fusion was processed at the feature level and the decision level. At the feature level, LiDAR 2D attributes were derived and combined with multispectral imagery vegetation indices. At the decision level, LiDAR data were used for 3D tree crown delimitation, providing unique trees or groups of trees. The segmented tree crowns were used as a support for an object-based species classification at tree level. Data augmentation techniques were used to improve the training process, and classification was carried out with a random forest classifier. The workflow was entirely automated using a Python script, which allowed the assessment of four different fusion configurations. The best results were obtained by the fusion of Sentinel-2 time series and LiDAR data with a kappa of 0.66, thanks to red edge-based indices that better discriminate vegetation species and the temporal resolution of Sentinel-2 images that allows monitoring the phenological stages, helping to discriminate the species.
22

P, Sakthivel, and Sumathy V. "Multispectral UAV Imagery based on Normalised Difference Land-vegetation Index in image processing for Internet of Things." SAMRIDDHI : A Journal of Physical Sciences, Engineering and Technology 16, no. 01 (2024): 20–27. http://dx.doi.org/10.18090/samriddhi.v16i01.03.

Abstract:
In this paper, a normalised difference land-vegetation index (NDLI) obtained from unmanned aerial vehicles (UAVs) with wireless sensor networks is shown to have significant potential for precision agriculture. Data from a UAV equipped with a multispectral MicaSense RedEdge camera are used as ground truth in this investigation to calibrate Sentinel imagery. By distinguishing non-green plant pixels, the UAV-based NDLI enabled crop assessment at an image resolution of 1187 × 707 pixels. The reflectance values and NDVI of crops at various stages are calculated using both UAV and Sentinel-2 images. In this investigation, UAV multispectral mapping technology provided detailed information about the physical characteristics of the studied area and better land feature delineation. According to the results, UAV data produced more accurate reflectance values than Sentinel-2 imagery. The accuracy of the vegetation index, on the other hand, is not entirely dependent on the precision of the reflectance. Compared with the NDLI produced from Sentinel-2 images, the UAV-derived NDLI, introduced together with a simple Green Measurement Matrix (DGM), shows comparatively low sensitivity to vegetation coverage and is unaffected by environmental conditions. Furthermore, it is used for the development of IoT networks by means of UAVs connected to the internet for instant field studies.
23

Vavassori, Alberto, Daniele Oxoli, Giovanna Venuti, et al. "PRISMA Hyperspectral Satellite Imagery Application to Local Climate Zones Mapping." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1-2024 (May 10, 2024): 643–48. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-2024-643-2024.

Abstract:
Abstract. The urban heat island effect exacerbates the vulnerability of cities to climate change, emphasizing the need for sustainable urban planning driven by data evidence. In the last decade, the Local Climate Zone (LCZ) model emerged as a key tool for categorizing urban landscapes, aiding in the development of urban temperature mitigation strategies. In this work, the contribution of hyperspectral satellite imagery to LCZ mapping, leveraging the Italian Space Agency (ASI)’s PRISMA satellite, is investigated. Mapping performances are compared with traditional multispectral-based LCZ mapping using Sentinel-2 satellite imagery. The Random Forest algorithm is utilized for LCZ classification, with evaluation conducted through spectral separability analysis and accuracy assessment between PRISMA and Sentinel-2 derived LCZ maps as well as with the benchmark LCZ Generator mapping tool. An initial experiment on the effect of PRISMA image pan-sharpening on LCZ spectral separability is also presented. Results obtained for Milan (Northern Italy) demonstrate the potential of hyperspectral imagery in enhancing LCZ identification compared to multispectral data, with promising improvements in LCZ maps overall accuracy. Finally, air temperature patterns within each LCZ class are explored, qualitatively confirming the influence of urban morphology on thermal comfort.
24

Mavraeidopoulos, Athanasios K., Emmanouil Oikonomou, Athanasios Palikaris, and Serafeim Poulos. "A Hybrid Bio-Optical Transformation for Satellite Bathymetry Modeling Using Sentinel-2 Imagery." Remote Sensing 11, no. 23 (2019): 2746. http://dx.doi.org/10.3390/rs11232746.

Abstract:
The article presents a new hybrid bio-optical transformation (HBT) method for the rapid modelling of bathymetry in coastal areas. The proposed approach exploits free-of-charge multispectral images and their processing by applying limited manpower and resources. The testbed area is a strait between two Greek Islands in the Aegean Sea with many small islets and complex seabed relief. The HBT methodology implements semi-analytical and empirical steps to model sea-water inherent optical properties (IOPs) and apparent optical properties (AOPs) observed by the Sentinel-2A multispectral satellite. The relationships of the calculated IOPs and AOPs are investigated and utilized to classify the study area into sub-regions with similar water optical characteristics, where no environmental observations have previously been collected. The bathymetry model is configured using very few field data (training depths) chosen from existing official nautical charts. The assessment of the HBT indicates the potential for obtaining satellite derived bathymetry with a satisfactory accuracy for depths down to 30 m.
25

Matejčíková, Júlia, Dana Vébrová, and Peter Surový. "Comparative Analysis of Machine Learning Techniques and Data Sources for Dead Tree Detection: What Is the Best Way to Go?" Remote Sensing 16, no. 16 (2024): 3086. http://dx.doi.org/10.3390/rs16163086.

Abstract:
In Central Europe, the extent of bark beetle infestation in spruce stands due to prolonged high temperatures and drought has created large areas of dead trees, which are difficult to monitor by ground surveys. Remote sensing is the only possibility for assessing the extent of the dead tree areas. Several options exist for mapping individual dead trees, including different data sources and different processing techniques. Satellite images, aerial images, and images from UAVs can be used as sources. Machine and deep learning techniques are included among the processing techniques, although models are often presented without proper realistic validation. This paper compares methods of monitoring dead tree areas using three data sources: multispectral aerial imagery, multispectral PlanetScope satellite imagery, and multispectral Sentinel-2 imagery, as well as two processing methods. The classification methods used are Random Forest (RF) and neural network (NN) in two modalities: pixel- and object-based. In total, 12 combinations are presented. The results were evaluated using two types of reference data: the accuracy of the model on validation data, and the accuracy on vector-format semi-automatic classification polygons created by a human evaluator, referred to as real Ground Truth. The aerial imagery was found to have the highest model accuracy, with the CNN model achieving up to 98% with object classification. A higher classification accuracy for satellite imagery was achieved by combining pixel classification and the RF model (87% accuracy for Sentinel-2). For PlanetScope imagery, the best result was 89%, using a combination of CNN and object-based classification. A comparison with the Ground Truth showed a decrease in the classification accuracy of the aerial imagery to 89% and of the satellite imagery to around 70%. In conclusion, aerial imagery is the most effective tool for monitoring bark beetle calamity in terms of precision and accuracy, but satellite imagery has the advantage of fast availability and shorter data processing time, together with larger coverage areas.
26

Recanatesi, Fabio, Antonietta De Santis, Lorenzo Gatti, et al. "A Comparative Analysis of Spatial Resolution Sentinel-2 and Pleiades Imagery for Mapping Urban Tree Species." Land 14, no. 1 (2025): 106. https://doi.org/10.3390/land14010106.

Abstract:
Urbanization poses significant challenges to ecosystems, resources, and human well-being, necessitating sustainable planning. Urban vegetation, particularly trees, provides critical ecosystem services such as carbon sequestration, air quality improvement, and biodiversity conservation. Traditional tree assessments are resource-intensive and time-consuming. Recent advances in remote sensing, especially high-resolution multispectral imagery and object-based image analysis (OBIA), offer efficient alternatives for mapping urban vegetation. This study evaluates and compares the efficacy of Sentinel-2 and Pléiades satellite imagery in classifying tree species within historic urban parks in Rome—Villa Borghese, Villa Ada Savoia, and Villa Doria Pamphilj. Pléiades imagery demonstrated superior classification accuracy, achieving an overall accuracy (OA) of 89% and a Kappa index of 0.84 in Villa Ada Savoia, compared to Sentinel-2’s OA of 66% and Kappa index of 0.47. Specific tree species, such as Pinus pinea (Stone Pine), reached a user accuracy (UA) of 84% with Pléiades versus 53% with Sentinel-2. These insights underscore the potential of integrating high-resolution remote sensing data into urban forestry practices to support sustainable urban management and planning.
27

Dumbá Monteiro de Castro, Gabriel, Emerson Ferreira Vilela, Ana Luísa Ribeiro de Faria, Rogério Antônio Silva, and Williams Pinto Marques Ferreira. "New vegetation index for monitoring coffee rust using sentinel-2 multispectral imagery." Coffee Science 18 (2023): 1–13. http://dx.doi.org/10.25186/.v18i.2170.

28

Wang, Yingxi, Ming Chen, Xiaotao Xi, and Hua Yang. "Bathymetry Inversion Using Attention-Based Band Optimization Model for Hyperspectral or Multispectral Satellite Imagery." Water 15, no. 18 (2023): 3205. http://dx.doi.org/10.3390/w15183205.

Abstract:
Satellite-derived bathymetry enables the non-contact derivation of large-scale shallow water depths. Hyperspectral satellite images provide more information than multispectral satellite images, making them theoretically more effective and accurate for bathymetry inversion. This paper focuses on the use of hyperspectral satellite images (PRISMA) for bathymetry inversion and compares the retrieval capabilities of multispectral satellite images (Sentinel-2 and Landsat 9) in the southeastern waters of Molokai Island in the Hawaiian Archipelago and Yinyu Island in the Paracel Archipelago. This paper proposes an attention-based band optimization one-dimensional convolutional neural network model (ABO-CNN) to better utilize the increased spectral information from multispectral and hyperspectral images for bathymetry inversion, and this model is compared with a traditional empirical model (Stumpf model) and two deep learning models (feedforward neural network and one-dimensional convolutional neural network). The results indicate that the ABO-CNN model outperforms the above three models, and the root mean square errors of retrieved bathymetry using the PRISMA images are 1.43 m and 0.73 m in the above two study areas, respectively. In summary, this paper demonstrates that PRISMA hyperspectral imagery has superior bathymetry inversion capabilities compared to multispectral images (Sentinel-2 and Landsat 9), and the proposed deep learning model ABO-CNN is a promising candidate model for satellite-derived bathymetry using hyperspectral imagery. With the increasing availability of ICESat-2 bathymetric data, the use of a combination of the proposed ABO-CNN model and the ICEsat-2 data as the training data provides a practical approach for bathymetric retrieval applications.
29

Kulikovska, Olha, Pavlo Kolodiy, and Roman Stupen. "UNLOCKING THE POSSIBILITIES OF USING MULTI-SPECTRAL IMAGES FOR ACCURATE CROP ASSESSMENT." Urban development and spatial planning, no. 87 (October 25, 2024): 368–87. https://doi.org/10.32347/2076-815x.2024.87.368-387.

Abstract:
The paper describes the theoretical and technical aspects of obtaining information on the condition of cereal crops from medium-resolution multispectral satellite imagery. Experimental studies were carried out to create digital index maps for fields with grain crops. The high efficiency of the methodology for studying the state of fields using multispectral satellite images, based on L1C and higher L2A processing-level products, is demonstrated, and the features and limitations of the methodology are shown. The theoretical and methodological foundations for processing multispectral satellite imagery have been implemented in many application services and software packages. Above all, they allow the generation of vegetation index maps, which are important products for explaining the state of vegetation in fields. Sentinel-2 L1C and Sentinel-2 L2A space imagery is a valuable and inexpensive source of information on crop condition. During the pilot study, Sentinel-2 L1C satellite image products were processed to bring them up to the level of Sentinel-2 L2A products. This is a complex step related to the correct selection of the atmospheric correction model and the conversion of pixel values into the reflectivity of objects on the Earth's surface. Another important step in this process is the ortho-correction of the images using global terrain models. Such correction can be important for the further generation of task maps for soil and crop management. The creation of a time series of vegetation index maps for a field of winter wheat is a very substantive way to build a field history, fully confirming the theory of remote field monitoring. The information obtained allows the processing algorithm to be adapted to refine the measurement and prediction of quantitative indicators (biomass calculation, yield prediction, etc.). The experiment also revealed some positive features and drawbacks of the method of creating index maps. For example, under certain summer weather conditions, correcting the images for atmospheric influence does not significantly affect the calculated index values. On the other hand, with the specified acquisition period (5 days), there may be no more than 3-5 cloudless days per month during the spring growing season, which significantly affects the efficiency of mapping. Accordingly, multispectral field surveys from unmanned aerial vehicles are more reliable and accurate, but significantly more expensive.
30

Iqbal Januadi Putra, Muhamad, Supriatna, and Wikanti Asriningum. "Hydrocarbon Microseepage Potential Area Exploration Using Sentinel 2 Imagery." E3S Web of Conferences 73 (2018): 03021. http://dx.doi.org/10.1051/e3sconf/20187303021.

Abstract:
Hydrocarbon microseepage is a common phenomenon in areas with onshore oil and gas reservoirs, characterized by abnormal surface spectral signatures of mineral alteration features and geobotanical anomalies that can be detected by satellite imagery. Therefore, this study aims to build spatial models of oil and gas reservoirs through detection of hydrocarbon microseepage and its relation to the physical conditions of the study area using satellite imagery. The parameters used are alteration indications of clay-carbonate, ferric iron, and ferrous iron minerals, geobotanical anomaly indications, geological characteristics, and geomorphological characteristics. Multispectral Sentinel-2 satellite imagery was used as input for the directed principal component analysis (DPCA) method and a vegetation index, to detect mineral alteration phenomena and geobotanical anomalies, respectively. Each parameter was then integrated using the fuzzy logic method, producing the distribution of the hydrocarbon microseepage area. The results indicate the presence of hydrocarbon microseepage in the research area over an extent of 488.3 ha, or 1.46% of the total research area. The hydrocarbon microseepage is distributed around the oil and gas field and is also distributed linearly along the Merang River. It is also concentrated in the Kasai Formation, in areas near faults, and in areas with lacustrine landform characteristics.
31

Wang, Junjie, Xin Shen, and Lin Cao. "Upscaling Forest Canopy Height Estimation Using Waveform-Calibrated GEDI Spaceborne LiDAR and Sentinel-2 Data." Remote Sensing 16, no. 12 (2024): 2138. http://dx.doi.org/10.3390/rs16122138.

Abstract:
Forest canopy height is a fundamental parameter of forest structure, and plays a pivotal role in understanding forest biomass allocation, carbon stock, forest productivity, and biodiversity. Spaceborne LiDAR (Light Detection and Ranging) systems, such as GEDI (Global Ecosystem Dynamics Investigation), provide large-scale estimation of ground elevation, canopy height, and other forest parameters. However, these measurements may have uncertainties influenced by topographic factors. This study focuses on the calibration of GEDI L2A and L1B data using an airborne LiDAR point cloud, and the combination of Sentinel-2 multispectral imagery, 1D convolutional neural network (CNN), artificial neural network (ANN), and random forest (RF) for upscaling estimated forest height in the Guangxi Gaofeng Forest Farm. First, various environmental (i.e., slope, solar elevation, etc.) and acquisition parameters (i.e., beam type, Solar elevation, etc.) were used to select and optimize the L2A footprint. Second, pseudo-waveforms were simulated from the airborne LiDAR point cloud and were combined with a 1D CNN model to calibrate the L1B waveform data. Third, the forest height extracted from the calibrated L1B waveforms and selected L2A footprints were compared and assessed, utilizing the CHM derived from the airborne LiDAR point cloud. Finally, the forest height data with higher accuracy were combined with Sentinel-2 multispectral imagery for an upscaling estimation of forest height. The results indicate that through optimization using environmental and acquisition parameters, the ground elevation and forest canopy height extracted from the L2A footprint are generally consistent with airborne LiDAR data (ground elevation: R2 = 0.99, RMSE = 4.99 m; canopy height: R2 = 0.42, RMSE = 5.16 m). Through optimizing, ground elevation extraction error was reduced by 45.5% (RMSE), and the canopy height extraction error was reduced by 30.3% (RMSE). After training a 1D CNN model to calibrate the forest height, the forest height information extracted using L1B has a high accuracy (R2 = 0.84, RMSE = 3.13 m). Compared to the optimized L2A data, the RMSE was reduced by 2.03 m. Combining the more accurate L1B forest height data with Sentinel-2 multispectral imagery and using RF and ANN for the upscaled estimation of the forest height, the RF model has the highest accuracy (R2 = 0.64, RMSE = 4.59 m). The results show that the extrapolation and inversion of GEDI, combined with multispectral remote sensing data, serve as effective tools for obtaining forest height distribution on a large scale.
32

Chen, Xidong, Liangyun Liu, Yuan Gao, Xiao Zhang, and Shuai Xie. "A Novel Classification Extension-Based Cloud Detection Method for Medium-Resolution Optical Images." Remote Sensing 12, no. 15 (2020): 2365. http://dx.doi.org/10.3390/rs12152365.

Abstract:
Accurate cloud detection using medium-resolution multispectral satellite imagery (such as Landsat and Sentinel data) is always difficult due to the complex land surfaces, diverse cloud types, and limited number of available spectral bands, especially in the case of images without thermal bands. In this paper, a novel classification extension-based cloud detection (CECD) method was proposed for masking clouds in the medium-resolution images. The new method does not rely on thermal bands and can be used for masking clouds in different types of medium-resolution satellite imagery. First, with the support of low-resolution satellite imagery with short revisit periods, cloud and non-cloud pixels were identified in the resampled low-resolution version of the medium-resolution cloudy image. Then, based on the identified cloud and non-cloud pixels and the resampled cloudy image, training samples were automatically collected to develop a random forest (RF) classifier. Finally, the developed RF classifier was extended to the corresponding medium-resolution cloudy image to generate an accurate cloud mask. The CECD method was applied to Landsat-8 and Sentinel-2 imagery to test the performance for different satellite images, and the well-known function of mask (FMASK) method was employed for comparison with our method. The results indicate that CECD is more accurate at detecting clouds in Landsat-8 and Sentinel-2 imagery, giving an average F-measure value of 97.65% and 97.11% for Landsat-8 and Sentinel-2 imagery, respectively, as against corresponding results of 90.80% and 88.47% for FMASK. It is concluded, therefore, that the proposed CECD algorithm is an effective cloud-classification algorithm that can be applied to the medium-resolution optical satellite imagery.
33

Gardossi, Anna Lilian, Antonio Tomao, MD Abdul Mueed Choudhury, Ernesto Marcheggiani, and Maurizia Sigura. "Semi-Automatic Extraction of Hedgerows from High-Resolution Satellite Imagery." Remote Sensing 17, no. 9 (2025): 1506. https://doi.org/10.3390/rs17091506.

Abstract:
Small landscape elements are critical in ecological systems, encompassing vegetated and non-vegetated features. As vegetated elements, hedgerows contribute significantly to biodiversity conservation, erosion protection, and wind speed reduction within agroecosystems. This study focuses on the semi-automatic extraction of hedgerows by applying the Object-Based Image Analysis (OBIA) approach to two multispectral satellite datasets. Multitemporal image data from PlanetScope and Copernicus Sentinel-2 have been used to test the applicability of the proposed approach for detailed land cover mapping, with an emphasis on extracting Small Woody Elements. This study demonstrates significant results in classifying and extracting hedgerows, a smaller landscape element, from both Sentinel-2 and PlanetScope images. A good overall accuracy (OA) was obtained using PlanetScope data (OA = 95%) and Sentinel-2 data (OA = 85%), despite the coarser resolution of the latter. This will undoubtedly demonstrate the effectiveness of the OBIA approach in leveraging freely available image data for detailed land cover mapping, particularly in identifying and classifying hedgerows, thus supporting biodiversity conservation and ecological infrastructure enhancement.
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Bo, Timothy L. Hawthorne, Hannah Torres, and Michael Feinman. "Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data." Drones 3, no. 3 (2019): 60. http://dx.doi.org/10.3390/drones3030060.

Full text
Abstract:
High-resolution mapping of coastal habitats is invaluable for resource inventory, change detection, and aquaculture inventory applications. However, coastal areas, especially the interior of mangroves, are often difficult to access. An Unmanned Aerial Vehicle (UAV) equipped with a multispectral sensor affords an opportunity to improve upon satellite imagery for coastal management because of its very high spatial resolution, multispectral capability, and ability to collect real-time observations. Despite the recent and rapid development of UAV mapping applications, few articles have quantitatively compared the improvement UAV multispectral mapping methods offer over more conventional remote sensing data such as satellite imagery. The objective of this paper is to quantitatively demonstrate the improvements a higher-resolution multispectral UAV mapping technique brings to mapping and assessing coastal land cover. We performed multispectral UAV mapping fieldwork trials over Indian River Lagoon along the central Atlantic coast of Florida. Ground Control Points (GCPs) were collected to generate a rigorously geo-referenced dataset of UAV imagery and to support comparison with geo-referenced satellite and aerial imagery. Multispectral satellite imagery (Sentinel-2) was also acquired to map land cover for the same region. NDVI and object-oriented classification methods were used to compare UAV and satellite mapping capabilities. Compared with aerial images acquired from the Florida Department of Environmental Protection, the UAV multispectral mapping method used in this study provided more detailed information on the physical conditions of the study area, improved land feature delineation, and a significantly better mapping product than coarser-resolution satellite imagery. The study demonstrates a replicable UAV multispectral mapping method useful for study sites that lack high-quality data.
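The NDVI comparison described above rests on a one-line band ratio, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch, with synthetic arrays standing in for the co-registered UAV and Sentinel-2 bands:
```python
# NDVI = (NIR - Red) / (NIR + Red), the index used in the UAV vs. satellite comparison.
# Synthetic reflectance arrays stand in for the red and near-infrared bands.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(2)
ndvi_uav = ndvi(rng.random((500, 500)), rng.random((500, 500)))   # very high resolution grid
ndvi_s2 = ndvi(rng.random((100, 100)), rng.random((100, 100)))    # coarser satellite grid
print("mean NDVI (UAV):", ndvi_uav.mean(), "mean NDVI (Sentinel-2):", ndvi_s2.mean())
```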
APA, Harvard, Vancouver, ISO, and other styles
35

Angelini, R., E. Angelats, G. Luzi, A. Masiero, G. Simarro, and F. Ribas. "Development of Methods for Satellite Shoreline Detection and Monitoring of Megacusp Undulations." Remote Sensing 16, no. 23 (2024): 4553. https://doi.org/10.3390/rs16234553.

Full text
Abstract:
Coastal zones, particularly sandy beaches, are highly dynamic environments subject to a variety of natural and anthropogenic forcings. The instantaneous shoreline is a widely used indicator of beach changes in image-based applications, and it can display undulations at different spatial and temporal scales. Megacusps, periodic seaward and landward shoreline perturbations, are an example of such undulations that can significantly modify beach width and impact its usability. Traditionally, the study of these phenomena relied on video monitoring systems, which provide high-frequency imagery but limited spatial coverage. Instead, this study explored the potential of employing multispectral satellite-derived shorelines, specifically from Sentinel-2 (S2) and PlanetScope (PLN) platforms, for characterizing and monitoring megacusps' formation and their dynamics over time. First, a tool was developed and validated to guarantee accurate shoreline detection, based on a combination of spectral indices, along with both thresholding and unsupervised clustering techniques. Validation of this shoreline detection phase was performed on three micro-tidal Mediterranean beaches, comparing with high-resolution orthomosaics and in-situ GNSS data and obtaining good subpixel accuracy (a mean absolute deviation of 1.5–5.5 m, depending on the satellite type). Second, a tool for megacusp characterization was implemented, and subsequent validation with reference data proved that satellite-derived shorelines could be used to robustly and accurately describe megacusps. The methodology could not only capture their amplitude and wavelength (of the order of 10 and 100 m, respectively) but also monitor their weekly-to-daily evolution using different potential metrics, thanks to combining S2 and PLN imagery. Our findings demonstrate that multispectral satellite imagery provides a viable and scalable solution for monitoring shoreline megacusp undulations, enhancing our understanding and offering an interesting option for coastal management.
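One common ingredient of such shoreline-detection tools, a water index thresholded with Otsu's method and traced as a land–water contour, can be sketched as below. This only illustrates the general approach; the authors combine several indices with both thresholding and unsupervised clustering, and the bands here are synthetic.
```python
# Sketch of a simple shoreline-detection step: compute NDWI = (Green - NIR) / (Green + NIR),
# threshold it with Otsu's method, and take the land/water boundary as the shoreline.
import numpy as np
from skimage.filters import threshold_otsu
from skimage import measure

rng = np.random.default_rng(3)
green = rng.random((400, 400)).astype(np.float32)   # synthetic green band
nir = rng.random((400, 400)).astype(np.float32)     # synthetic near-infrared band

ndwi = (green - nir) / (green + nir + 1e-6)
water = ndwi > threshold_otsu(ndwi)

# Shoreline approximated as the contour separating the water and land classes.
contours = measure.find_contours(water.astype(float), 0.5)
shoreline = max(contours, key=len)                  # keep the longest boundary
print("shoreline vertices:", shoreline.shape[0])
```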
APA, Harvard, Vancouver, ISO, and other styles
36

Banerjee, S., T. Bhadra, A. Saha, et al. "Genus Level Classification of the Mangroves in Indian Sundarbans using Sentinel-2 Multispectral Imagery." IOP Conference Series: Earth and Environmental Science 1382, no. 1 (2024): 012011. http://dx.doi.org/10.1088/1755-1315/1382/1/012011.

Full text
Abstract:
Mangroves are among the most productive ecosystems: they stabilize coastlines, contribute to carbon sequestration, reduce storm surges, protect coastal inhabitants, and play a role in sustaining the local economy. Mangroves are halophytic in nature and typically thrive in the saline intertidal zone along tropical and subtropical coastlines. This paper explores the potential of Sentinel-2 MSI imagery for separating different mangrove genera, evaluated using several classification algorithms: the Maximum Likelihood Classifier (MLC), Mahalanobis Distance, Minimum Distance, Spectral Angle Mapper (SAM), Random Forest (RF), and Support Vector Machine (SVM). The estimated accuracy is higher for Random Forest (88.50%), followed by Support Vector Machine (85.30%) and Maximum Likelihood Classifier (85.10%), compared with Mahalanobis Distance (81.10%), Spectral Angle Mapper (71.20%), and Minimum Distance (75.30%). The results show that Sentinel-2 multispectral imagery can efficiently discriminate 15 different mangrove genera distributed across the entire Indian Sundarbans while helping to overcome the cost and availability barriers associated with spaceborne and airborne hyperspectral sensors.
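The classifier comparison reported above follows a standard pattern: extract per-pixel band values for labelled samples, train each classifier, and compare overall accuracies on a held-out set. A generic sketch with synthetic samples (not the study's data or tuning) is shown below.
```python
# Generic classifier comparison on per-pixel spectral samples; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
X = rng.random((3000, 10))                 # 10 Sentinel-2 band values per sample (hypothetical)
y = rng.integers(0, 15, 3000)              # 15 genus labels (stand-in)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)
for name, clf in [("RF", RandomForestClassifier(n_estimators=300, random_state=4)),
                  ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```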
APA, Harvard, Vancouver, ISO, and other styles
37

Hasanlou, Mahdi, Reza Shah-Hosseini, Seyd Teymoor Seydi, Sadra Karimzadeh, and Masashi Matsuoka. "Earthquake Damage Region Detection by Multitemporal Coherence Map Analysis of Radar and Multispectral Imagery." Remote Sensing 13, no. 6 (2021): 1195. http://dx.doi.org/10.3390/rs13061195.

Full text
Abstract:
Earth, as humans’ habitat, is constantly affected by natural events such as floods, earthquakes, thunderstorms, and droughts, among which earthquakes are considered one of the deadliest and most catastrophic natural disasters. The Iran-Iraq earthquake occurred in Kermanshah Province, Iran in November 2017; it was a 7.4-magnitude seismic event that caused immense damage and loss of life. The rapid detection of damage caused by earthquakes is of great importance for disaster management. Thanks to their wide coverage, high resolution, and low cost, remote-sensing images play an important role in environmental monitoring. This study presents a new unsupervised damage detection method using multitemporal optical and radar images acquired by the Sentinel missions. The proposed method is applied in two main phases: (1) automatic built-up extraction using spectral indices and an active learning framework on Sentinel-2 imagery; (2) damage detection based on multitemporal coherence map clustering and similarity measure analysis using Sentinel-1 imagery. The main advantage of the proposed method is that it is an unsupervised method with simple usage and a low computing burden, and it uses medium-spatial-resolution imagery that has good temporal resolution and is operative at any time and in any atmospheric conditions, with high accuracy for detecting deformations in buildings. The accuracy analysis of the proposed method found it visually and numerically comparable to other state-of-the-art methods for built-up area detection. The proposed method is capable of detecting built-up areas with an accuracy of more than 96% and a kappa of about 0.89, overall, in comparison with other methods. Furthermore, the proposed method is also able to detect damaged regions with an accuracy of more than 70%, in comparison with other state-of-the-art damage detection methods.
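The second phase described above, clustering a multitemporal coherence stack to isolate damaged built-up areas, can be loosely illustrated as follows. This is a simplified stand-in using k-means on synthetic pre- and co-event coherence layers, not the authors' similarity-measure analysis.
```python
# Loose sketch of damage detection via coherence clustering: pixels whose pre-event
# InSAR coherence is high but whose co-event coherence drops are grouped by k-means,
# and the cluster with the largest mean drop is flagged as "damaged".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
coh_pre = rng.random((500, 500)).astype(np.float32)    # pre-event coherence (synthetic)
coh_co = rng.random((500, 500)).astype(np.float32)     # co-event coherence (synthetic)

features = np.stack([coh_pre.ravel(), coh_co.ravel(),
                     (coh_pre - coh_co).ravel()], axis=1)
labels = KMeans(n_clusters=3, n_init=10, random_state=5).fit_predict(features)

drops = [features[labels == k, 2].mean() for k in range(3)]
damage_mask = (labels == int(np.argmax(drops))).reshape(coh_pre.shape)
print("flagged fraction:", damage_mask.mean())
```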
APA, Harvard, Vancouver, ISO, and other styles
38

Armannsson, Sveinn E., Magnus O. Ulfarsson, and Jakob Sigurdsson. "A Learned Reduced-Rank Sharpening Method for Multiresolution Satellite Imagery." Remote Sensing 17, no. 3 (2025): 432. https://doi.org/10.3390/rs17030432.

Full text
Abstract:
This paper implements an unsupervised single-image sharpening method for multispectral images, focusing on Sentinel-2 and Landsat 8 imagery. The method combines traditional model-based approaches with neural network optimization: it solves the same optimization problem as model-based methods while leveraging a customized U-Net architecture and a specialized loss function. The key innovation lies in simultaneously optimizing a low-rank approximation of the target image and a linear transformation from the subspace to the sharpened image within an unsupervised training framework. The method offers several distinct advantages: it requires no external training data beyond the image being processed, it provides fast training through a compact, interpretable network model, and, most importantly, it adapts to different input images without requiring extensive parameter tuning, a common limitation of traditional methods. The method was developed with a focus on sharpening Sentinel-2 imagery. The Copernicus Sentinel-2 satellite constellation captures images at three different spatial resolutions (10, 20, and 60 m), and many applications benefit from a unified 10 m resolution. Still, the method’s effectiveness extends to other remote sensing tasks, achieving competitive results in both sharpening and multisensor fusion scenarios. It is evaluated using both real and simulated data, and its versatility is shown through successful applications to Sentinel-2 sharpening and Sentinel-2/Landsat 8 fusion. In comparison with leading methods, it is shown to give excellent results.
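The method itself is learned, but the reduced-rank idea it builds on, representing the band-stacked image in a low-dimensional spectral subspace and mapping back through a linear transform, can be illustrated with a classical truncated SVD. The sketch below is only that illustration, not the authors' network.
```python
# Not the learned method: a classical truncated-SVD projection illustrating the
# reduced-rank representation of a band-stacked image (low-rank coefficients Z
# plus a linear map W back to the band space). Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
bands, h, w, rank = 12, 256, 256, 4
Y = rng.random((bands, h * w)).astype(np.float32)       # stacked multispectral bands

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Z = np.diag(s[:rank]) @ Vt[:rank]        # low-rank subspace coefficients
W = U[:, :rank]                          # linear map from subspace to bands
Y_lowrank = W @ Z                        # reduced-rank reconstruction

err = np.linalg.norm(Y - Y_lowrank) / np.linalg.norm(Y)
print(f"relative reconstruction error at rank {rank}: {err:.3f}")
```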
APA, Harvard, Vancouver, ISO, and other styles
39

Salgueiro Romero, Luis, Javier Marcello, and Verónica Vilaplana. "Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks." Remote Sensing 12, no. 15 (2020): 2424. http://dx.doi.org/10.3390/rs12152424.

Full text
Abstract:
Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m of spatial resolution. These images, due to the open data distribution policy, are becoming an important resource for several applications. However, for small scale studies, the spatial detail of these images might not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with a very high spatial resolution, typically less than 2 m, but their use can be impractical for large areas or multi-temporal analysis due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution tasks, allowing the spatial enhancement of low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. Results obtained outperform state-of-the-art models using standard metrics like PSNR, SSIM, ERGAS, SAM and CC. Moreover, qualitative visual analysis shows spatial improvements as well as the preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
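Two of the quality metrics listed above, SAM and ERGAS, are easy to reproduce and are sketched below using their common remote-sensing definitions; the reference and super-resolved arrays are synthetic placeholders.
```python
# SAM (mean spectral angle) and ERGAS, written out explicitly as a sketch.
import numpy as np

def sam(ref, est, eps=1e-12):
    """Mean spectral angle (radians) between reference and estimate, shape (H, W, B)."""
    dot = np.sum(ref * est, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    return np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0)))

def ergas(ref, est, scale_ratio):
    """ERGAS = 100/ratio * sqrt(mean_b (RMSE_b / mean_ref_b)^2); lower is better."""
    rmse_b = np.sqrt(np.mean((ref - est) ** 2, axis=(0, 1)))
    mean_b = np.mean(ref, axis=(0, 1))
    return 100.0 / scale_ratio * np.sqrt(np.mean((rmse_b / mean_b) ** 2))

rng = np.random.default_rng(7)
reference = rng.random((128, 128, 4))
super_resolved = reference + 0.01 * rng.standard_normal((128, 128, 4))
print("SAM (rad):", sam(reference, super_resolved))
print("ERGAS:", ergas(reference, super_resolved, scale_ratio=5))
```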
APA, Harvard, Vancouver, ISO, and other styles
40

Dike, Emmanuel Chigozie, Bright Godfrey Ameme, and Evangeline Nkiruka Le-ol Anthony. "Comparative Analysis of Multi-Spectral Shoreline Delineation Using Landsat-8, Sentinel-2, and PlanetScope Imageries in Coastal Environments of Nigeria." International Journal of Scientific Research and Technology 2, no. 2 (2025): 159–74. https://doi.org/10.5281/zenodo.14907334.

Full text
Abstract:
Changes in the shoreline position are a result of climate change-induced sea level rise and morphological changes caused by coastal processes. The delineation of shoreline positions relies on robust techniques and data sources, with remote sensing being particularly advantageous due to its cost-effectiveness and technological advancements. The study focuses on the eastern Niger Delta region of Nigeria, utilising mid-resolution multispectral datasets from Landsat-8 OLI, Sentinel-2 MSI, and PlanetScope to compare shoreline positions derived from different water indices (NDVI and NDWI), threshold values, and classification methods. The methodology involves preprocessing optical imagery, applying water indices for shoreline delineation, and employing unsupervised classification techniques. The accuracy of shoreline positions is assessed using metrics such as mean error position (MEP) and root mean square error (RMSE), revealing significant discrepancies between datasets, particularly between high-resolution PlanetScope and coarser Landsat-8 imagery. Results indicate that the highest positional agreement was achieved between PlanetScope and Sentinel-2, while Landsat-8 showed higher errors due to its coarser resolution. This study indicates that higher-resolution sensors provide more precise shoreline mapping, essential for effective coastal management. The study underscores the necessity of sensor-specific calibration and highlights the potential of integrating multiple satellite datasets to enhance the accuracy of shoreline monitoring and environmental assessments.
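The two accuracy metrics named in the abstract, mean error position (MEP) and RMSE, reduce to simple statistics over the signed offsets between a satellite-derived shoreline and a reference shoreline. A minimal sketch with hypothetical offsets in metres:
```python
# MEP and RMSE over signed shoreline offsets (metres); offsets here are synthetic.
import numpy as np

def mean_error_position(offsets_m):
    """MEP: mean signed offset; indicates a landward/seaward bias."""
    return float(np.mean(offsets_m))

def rmse(offsets_m):
    """RMSE: overall positional error magnitude."""
    return float(np.sqrt(np.mean(np.square(offsets_m))))

rng = np.random.default_rng(8)
offsets = rng.normal(loc=3.0, scale=8.0, size=200)   # e.g. one sensor pair along 200 transects
print("MEP (m):", mean_error_position(offsets))
print("RMSE (m):", rmse(offsets))
```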
APA, Harvard, Vancouver, ISO, and other styles
41

Heiselberg, Henning. "A Direct and Fast Methodology for Ship Recognition in Sentinel-2 Multispectral Imagery." Remote Sensing 8, no. 12 (2016): 1033. http://dx.doi.org/10.3390/rs8121033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Zheng, Qiong, Wenjiang Huang, Ximin Cui, Yue Shi, and Linyi Liu. "New Spectral Index for Detecting Wheat Yellow Rust Using Sentinel-2 Multispectral Imagery." Sensors 18, no. 3 (2018): 868. http://dx.doi.org/10.3390/s18030868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Seydi, S. T., and H. Arefi. "A COMPARISON OF DEEP LEARNING-BASED SUPER-RESOLUTION FRAMEWORKS FOR SENTINEL-2 IMAGERY IN URBAN AREAS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1/W1-2023 (December 5, 2023): 1021–26. http://dx.doi.org/10.5194/isprs-annals-x-1-w1-2023-1021-2023.

Full text
Abstract:
High-resolution images are in demand for many urban monitoring applications. The advent of remote sensing satellites such as Sentinel-2 has made data more accessible, as it provides free multispectral imagery. However, the spatial resolution of these images is not sufficient for many tasks. With the advent of deep learning techniques, significant progress has been made in super-resolution, which has shown promising results in improving the spatial resolution of satellite images. In this study, we compare four of the most common deep learning-based models for the super-resolution of Sentinel-2 imagery in dense urban areas using aerial images: the enhanced deep super-resolution network (EDSR), the enhanced super-resolution generative adversarial network (ESRGAN), the residual feature distillation network (RFDN), and the Super-Resolution Convolutional Neural Network (SRCNN). To determine the effectiveness of the models in improving image resolution, they were evaluated using visual quality and quantitative metrics. The super-resolution results show that deep learning-based models have high potential for generating a high-resolution dataset from Sentinel-2 imagery in urban areas. The RFDN outperformed the other deep learning-based models, achieving a peak signal-to-noise ratio (PSNR) of more than 17.8.
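The PSNR figure quoted above is computed as below for images scaled to [0, 1]; the arrays are synthetic and only illustrate the metric, not the compared models.
```python
# PSNR for images in [0, 1]; higher is better.
import numpy as np

def psnr(ref, est, data_range=1.0):
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(9)
hr = rng.random((256, 256, 3))
sr = np.clip(hr + 0.02 * rng.standard_normal(hr.shape), 0, 1)
print(f"PSNR: {psnr(hr, sr):.2f} dB")
```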
APA, Harvard, Vancouver, ISO, and other styles
44

Hernando, A. M., A. M. Piano, A. C. Blanco, J. M. Medina, and A. Y. Manuel. "COMPARATIVE ANALYSIS OF PRISMA HYPERSPECTRAL AND SENTINEL-2 MULTISPECTRAL IMAGES FOR CHLOROPHYLL-A AND TURBIDITY MAPPING OF TAAL LAKE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W8-2023 (April 25, 2024): 293–99. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w8-2023-293-2024.

Full text
Abstract:
Freshwater bodies like Taal Lake play a pivotal role in providing essential resources like fresh drinking water and in supporting local livelihoods. This study aimed to examine the environmental conditions of Taal Lake by quantifying chlorophyll-a (chl-a) and turbidity levels with the Water Colour Simulator (WASI) and water quality indices, specifically with Sentinel-2 and PRISMA imagery. The results from the satellite image-derived data revealed discernible variations in chlorophyll-a and turbidity concentrations across different regions of Taal Lake. Higher chlorophyll-a was consistently observed in the western regions (1.3–5.9 μg/L for Sentinel-2; 1.4–5.4 μg/L for PRISMA), while lower concentrations were found in the south (1.7–3.4 μg/L for Sentinel-2; 1.9–3.3 μg/L for PRISMA). Meanwhile, turbidity values were higher in the eastern and northeastern parts (0.15–0.59 mg/L for Sentinel-2; 0.13–0.51 mg/L for PRISMA). Water quality indices (NDTI & NDCI) also supported these findings. The findings of this comparative analysis between PRISMA hyperspectral and Sentinel-2 multispectral imagery demonstrate the potential of both satellite systems in providing valuable insights into the spatial distribution of water quality parameters in Taal Lake. Nonetheless, discrepancies observed in the scatter plots and inversion failures in PRISMA underscore the need for further research and refinement in utilizing PRISMA data with the WASI model. This study highlights the significance of freshwater bodies and the importance of monitoring their health for the well-being of communities and the environment.
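The two water-quality indices cited above, NDCI and NDTI, are simple band ratios; the sketch below uses the usual Sentinel-2 band choices (B3 green, B4 red, B5 red edge) with synthetic reflectance arrays.
```python
# NDCI and NDTI as normalized band differences; inputs are hypothetical reflectances.
import numpy as np

def ndci(b5_red_edge, b4_red, eps=1e-6):
    """Normalized Difference Chlorophyll Index."""
    return (b5_red_edge - b4_red) / (b5_red_edge + b4_red + eps)

def ndti(b4_red, b3_green, eps=1e-6):
    """Normalized Difference Turbidity Index."""
    return (b4_red - b3_green) / (b4_red + b3_green + eps)

rng = np.random.default_rng(10)
b3, b4, b5 = (rng.random((200, 200)).astype(np.float32) for _ in range(3))
print("mean NDCI:", ndci(b5, b4).mean(), "mean NDTI:", ndti(b4, b3).mean())
```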
APA, Harvard, Vancouver, ISO, and other styles
45

Popov, Mykhailo, Sergey Stankevich, Olga Sedlerova, et al. "Prospect of involving Sentinel-2 imagery for analysis of possible causes of chemical emissions at the Crimean Titan plant." Ukrainian journal of remote sensing, no. 18 (November 9, 2018): 29–31. http://dx.doi.org/10.36023/ujrs.2018.18.133.

Full text
Abstract:
The paper proposes an approach to assessing the humidity within the acid storage tank of the “Crimean Titan” plant, one of the likely causes of the chemical pollution observed at the end of August 2018, based on the MDNWI water index calculated from Sentinel-2 multispectral images.
APA, Harvard, Vancouver, ISO, and other styles
46

Ma, Jinge, Haoran Shen, Yuanxiu Cai, et al. "UCTNet with Dual-Flow Architecture: Snow Coverage Mapping with Sentinel-2 Satellite Imagery." Remote Sensing 15, no. 17 (2023): 4213. http://dx.doi.org/10.3390/rs15174213.

Full text
Abstract:
Satellite remote sensing (RS) has been drawing considerable research interest in land-cover classification due to its low price, short revisit time, and large coverage. However, clouds pose a significant challenge, occluding objects in satellite RS images. In addition, snow coverage mapping plays a vital role in studying hydrology and climatology and in investigating crop disease overwintering for smart agriculture. Distinguishing snow from clouds is challenging since they share similar color and reflection characteristics. Conventional approaches based on manual thresholding and machine learning algorithms (e.g., SVM and Random Forest) cannot fully extract useful information, while current deep-learning methods, e.g., CNN or Transformer models, still have limitations in fully exploiting the abundant spatial/spectral information of RS images. Therefore, this work aims to develop an efficient snow and cloud classification algorithm using satellite multispectral RS images. To this end, we propose an innovative algorithm, UCTNet, which adopts a dual-flow structure to integrate information extracted via Transformer and CNN branches. Specifically, a CNN and Transformer Integration Module (CTIM) is designed to integrate the information extracted by the two branches as fully as possible. Meanwhile, a Final Information Fusion Module and an Auxiliary Information Fusion Head are designed for better performance. A four-band satellite multispectral RS dataset for snow coverage mapping is adopted for performance evaluation. Compared with previous methods (e.g., U-Net, Swin, and CSDNet), the experimental results show that the proposed UCTNet achieves the best performance in terms of accuracy (95.72%) and mean IoU score (91.21%) with the smallest model size (3.93 M). The confirmed efficiency of UCTNet shows great potential for dual-flow architectures in snow and cloud classification.
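A dual-flow design of this kind can be illustrated with a toy module that runs a CNN branch and a Transformer branch in parallel and fuses them with a 1×1 convolution, a loose analogue of the CTIM described above. The sketch below is not UCTNet; layer sizes and the two-class head are arbitrary assumptions.
```python
# Toy dual-flow block (not UCTNet): CNN and Transformer branches fused by a 1x1 conv.
import torch
import torch.nn as nn

class DualFlowBlock(nn.Module):
    def __init__(self, in_ch=4, dim=32, num_heads=4, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.embed = nn.Conv2d(in_ch, dim, 1)
        self.transformer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                      batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, n_classes, 1)          # e.g. snow vs. cloud logits

    def forward(self, x):
        b, _, h, w = x.shape
        f_cnn = self.cnn(x)                                   # B x dim x H x W
        tokens = self.embed(x).flatten(2).transpose(1, 2)     # B x (H*W) x dim
        f_trf = self.transformer(tokens).transpose(1, 2).reshape(b, -1, h, w)
        return self.fuse(torch.cat([f_cnn, f_trf], dim=1))    # B x n_classes x H x W

logits = DualFlowBlock()(torch.randn(1, 4, 32, 32))   # four-band patch
print(logits.shape)                                   # torch.Size([1, 2, 32, 32])
```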
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, Michael, Niall G. Burnside, Matthew Brolly, and Chris B. Joyce. "Investigating the Role of Cover-Crop Spectra for Vineyard Monitoring from Airborne and Spaceborne Remote Sensing." Remote Sensing 16, no. 21 (2024): 3942. http://dx.doi.org/10.3390/rs16213942.

Full text
Abstract:
The monitoring of grape quality parameters within viticulture using airborne remote sensing is an increasingly important aspect of precision viticulture. Airborne remote sensing allows high volumes of spatially consistent data to be collected with improved efficiency over ground-based surveys. Spectral data can be used to understand the characteristics of vineyards, including the characteristics and health of the vines. Within viticultural remote sensing, the use of cover-crop spectra for monitoring is often overlooked due to the perceived noise it generates within imagery. However, within viticulture, the cover crop is a widely used and important management tool. This study uses multispectral data acquired by a high-resolution uncrewed aerial vehicle (UAV) and Sentinel-2 MSI to explore the benefit that cover-crop pixels could have for grape yield and quality monitoring. This study was undertaken across three growing seasons in the southeast of England, at a large commercial wine producer. The site was split into a number of vineyards, with sub-blocks for different vine varieties and rootstocks. Pre-harvest multispectral UAV imagery was collected across three vineyard parcels. UAV imagery was radiometrically corrected and stitched to create orthomosaics (red, green, and near-infrared) for each vineyard and survey date. Orthomosaics were segmented into pure cover-crop (UAV) and pure vine (UAV) pixels, removing the impact that mixed pixels could have upon analysis, with three vegetation indices (VIs) constructed from the segmented imagery. Sentinel-2 Level 2a bottom-of-atmosphere scenes were also acquired as close to the UAV surveys as possible. In parallel, yield and quality surveys were undertaken one to two weeks prior to harvest. Laboratory refractometry was performed to determine the grape total acid, total soluble solids, alpha amino acids, and berry weight. Extreme gradient boosting (XGBoost v2.1.1) was used to determine the ability of remote sensing data to predict the grape yield and quality parameters. Results suggested that pure cover-crop (UAV) spectra were a successful predictor of grape yield and quality parameters (range of R2 = 0.37–0.45), with model evaluation results comparable to the pure vine (UAV) and Sentinel-2 models. The analysis also showed that, whilst the structural similarity between the UAV and Sentinel-2 data was high, the cover crop is the most influential spectral component within the Sentinel-2 data. This research presents novel evidence for the ability of cover-crop (UAV) spectra to predict grape yield and quality. Moreover, this finding provides a mechanism which explains the success of the Sentinel-2 modelling of grape yield and quality. For growers and wine producers, creating grape yield and quality prediction models through moderate-resolution satellite imagery would be a significant innovation. Proving more cost-effective than UAV monitoring for large vineyards, such methodologies could also act to bring substantial cost savings to vineyard management.
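The modelling step described above, regressing a grape yield or quality parameter on vegetation-index features with gradient boosting, can be sketched as follows; the feature names and synthetic data are placeholders, not the study's variables or tuning.
```python
# Sketch of gradient-boosted regression of a quality parameter on VI features.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(11)
X = rng.random((300, 3))                             # e.g. per-plot cover-crop NDVI and two other VIs
y = 20 + 5 * X[:, 0] + rng.normal(0, 0.5, 300)       # synthetic quality parameter (e.g. TSS)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=11)
model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))
```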
APA, Harvard, Vancouver, ISO, and other styles
48

Farahnakian, Fahimeh, Nike Luodes, and Teemu Karlsson. "Machine Learning Algorithms for Acid Mine Drainage Mapping Using Sentinel-2 and Worldview-3." Remote Sensing 16, no. 24 (2024): 4680. https://doi.org/10.3390/rs16244680.

Full text
Abstract:
Acid Mine Drainage (AMD) presents significant environmental challenges, particularly in regions with extensive mining activities. Effective monitoring and mapping of AMD are crucial for mitigating its detrimental impacts on ecosystems and water quality. This study investigates the application of Machine Learning (ML) algorithms to map AMD by fusing multispectral imagery from Sentinel-2 with high-resolution imagery from WorldView-3. We applied three widely used ML models—Random Forest (RF), K-Nearest Neighbor (KNN), and Multilayer Perceptron (MLP)—to address both classification and regression tasks. The classification models aimed to distinguish between AMD and non-AMD samples, while the regression models provided quantitative pH mapping. Our experiments were conducted on three lakes in the Outokumpu mining area in Finland, which are affected by mine waste and acidic drainage. Our results indicate that combining Sentinel-2 and WorldView-3 data significantly enhances the accuracy of AMD detection. This combined approach leverages the strengths of both datasets, providing a more robust and precise assessment of AMD impacts.
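The regression branch described above, predicting pH from fused Sentinel-2 and WorldView-3 features, can be sketched with a random forest regressor as below; band counts, sample sizes, and the fusion-by-stacking choice are assumptions for illustration only.
```python
# Feature-level fusion of two sensors followed by random forest regression of pH.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
s2_feats = rng.random((400, 10))        # Sentinel-2 band values per sample (synthetic)
wv3_feats = rng.random((400, 8))        # WorldView-3 VNIR band values (synthetic)
X = np.hstack([s2_feats, wv3_feats])    # simple stacking as feature-level fusion
y_ph = rng.uniform(2.5, 7.5, 400)       # in-situ pH measurements (synthetic)

reg = RandomForestRegressor(n_estimators=500, random_state=12)
scores = cross_val_score(reg, X, y_ph, cv=5, scoring="neg_root_mean_squared_error")
print("pH RMSE per fold:", -scores)
```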
APA, Harvard, Vancouver, ISO, and other styles
49

Dlamini, Mandla, George Chirima, Mbulisi Sibanda, Elhadi Adam, and Timothy Dube. "Characterizing Leaf Nutrients of Wetland Plants and Agricultural Crops with Nonparametric Approach Using Sentinel-2 Imagery Data." Remote Sensing 13, no. 21 (2021): 4249. http://dx.doi.org/10.3390/rs13214249.

Full text
Abstract:
In arid environments of the world, particularly in sub-Saharan Africa and Asia, floodplain wetlands are a valuable agricultural resource. However, the water reticulation role of wetlands and crop production can negatively impact wetland plants. Knowledge of the foliar biochemical elements of wetland plants enhances understanding of the impacts of agricultural practices in wetlands. This study thus used Sentinel-2 multispectral data to predict seasonal variations in the concentrations of nine foliar biochemical elements in plant leaves of key floodplain wetland vegetation types and crops in the uMfolozi floodplain system (UFS). Nutrient concentrations in different floodplain plant species were estimated using vegetation indices derived from Sentinel-2 multispectral data in concert with random forest regression. The results showed a mean R2 of 0.87 and 0.86 for the dry winter and wet summer seasons, respectively. However, copper, sulphur, and magnesium were poorly correlated (R2 ≤ 0.5) with vegetation indices during the summer season. The average relative root mean square errors (RMSE, %) for seasonal nutrient estimation for crops and wetland vegetation were 15.2% and 26.8%, respectively. There was a significant difference in nutrient concentrations between the two plant types (R2 = 0.94 for crops; R2 = 0.84 for wetland vegetation). The red-edge position 1 (REP1) and the normalised difference vegetation index (NDVI) were the best nutrient predictors. These results demonstrate the usefulness of Sentinel-2 imagery and random forest regression in predicting seasonal nutrient concentrations as well as the accumulation of chemicals in wetland vegetation and crops.
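The two best predictors named above, REP1 and NDVI, can be computed from Sentinel-2 bands and fed to a random forest regressor as sketched below. The red-edge position uses one common linear-interpolation formulation; the data are synthetic and the formulation is an assumption rather than the authors' exact implementation.
```python
# Red-edge position (one common Sentinel-2 linear-interpolation form) and NDVI
# as predictors for random forest regression of a leaf nutrient (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(13)
b4, b5, b6, b7, b8 = (rng.random(500).astype(np.float32) + 0.05 for _ in range(5))

ndvi = (b8 - b4) / (b8 + b4)
rep = 705 + 35 * (((b4 + b7) / 2 - b5) / (b6 - b5 + 1e-6))   # red-edge position (nm), assumed form

X = np.column_stack([ndvi, rep])
y = rng.random(500) * 3.0          # e.g. a foliar nutrient concentration (synthetic)
rf = RandomForestRegressor(n_estimators=300, random_state=13).fit(X, y)
print("in-sample R2:", rf.score(X, y))
```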
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Aizhu, Genyun Sun, Ping Ma, et al. "Coastal Wetland Mapping with Sentinel-2 MSI Imagery Based on Gravitational Optimized Multilayer Perceptron and Morphological Attribute Profiles." Remote Sensing 11, no. 8 (2019): 952. http://dx.doi.org/10.3390/rs11080952.

Full text
Abstract:
Coastal wetland mapping plays an essential role in monitoring climate change, the hydrological cycle, and water resources. In this study, a novel classification framework based on a gravitationally optimized multilayer perceptron classifier and extended multi-attribute profiles (EMAPs) is presented for coastal wetland mapping using Sentinel-2 MultiSpectral Instrument (MSI) imagery. In the proposed method, morphological attribute profiles (APs) are first extracted from each Sentinel-2 band using four attribute filters chosen according to the characteristics of wetlands. These APs form a set of EMAPs which comprehensively represent the irregular wetland objects at multiple scales and levels. The EMAPs and the original spectral features are then classified with a new multilayer perceptron (MLP) classifier whose parameters are optimized by a gravitational search algorithm with a stability-constrained adaptive alpha. The performance of the proposed method was investigated using Sentinel-2 MSI images of two coastal wetlands, Jiaozhou Bay and the Yellow River Delta, in Shandong Province of eastern China. Comparisons with four other classifiers through visual inspection and quantitative evaluation verified the superiority of the proposed method. Furthermore, the effectiveness of the different APs within the EMAPs was also validated. By combining the developed EMAP features and the novel MLP classifier, complicated wetland types with high within-class variability and low between-class disparity were effectively discriminated. The superior performance of the proposed framework makes it preferable for the mapping of complicated coastal wetlands using Sentinel-2 data and other similar optical imagery.
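The feature-construction step, building attribute profiles per band and classifying the stacked features with an MLP, can be loosely sketched with area openings and closings at a few thresholds. The paper optimizes its MLP with a gravitational search algorithm; the scikit-learn gradient-based MLP below is a stand-in, and all thresholds, labels, and data are synthetic.
```python
# Loose sketch: simple attribute profiles (area opening/closing) stacked per band
# and classified pixel-wise with a standard MLP. Not the paper's GSA-optimized MLP.
import numpy as np
from skimage.morphology import area_opening, area_closing
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(14)
band = (rng.random((200, 200)) * 1000).astype(np.int32)   # one Sentinel-2 band (synthetic)

profiles = [band]
for area in (50, 200, 800):                               # assumed attribute thresholds
    profiles.append(area_opening(band, area_threshold=area))
    profiles.append(area_closing(band, area_threshold=area))
X = np.stack(profiles, axis=-1).reshape(-1, len(profiles)).astype(np.float32)

# Synthetic wetland-class labels for a training subset of pixels.
idx = rng.choice(X.shape[0], 2000, replace=False)
y = rng.integers(0, 5, 2000)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=14).fit(X[idx], y)
wetland_map = clf.predict(X).reshape(200, 200)
print("mapped classes:", np.unique(wetland_map))
```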
APA, Harvard, Vancouver, ISO, and other styles