Journal articles on the topic 'Multiscale remote sensing'

Consult the top 50 journal articles for your research on the topic 'Multiscale remote sensing.'


1

Mesev, V. "Multiscale and Multitemporal Urban Remote Sensing." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIX-B2 (July 25, 2012): 17–21. http://dx.doi.org/10.5194/isprsarchives-xxxix-b2-17-2012.

2

dos Santos, Jefersson Alex, Philippe-Henri Gosselin, Sylvie Philipp-Foliguet, Ricardo da S. Torres, and Alexandre Xavier Falcão. "Multiscale Classification of Remote Sensing Images." IEEE Transactions on Geoscience and Remote Sensing 50, no. 10 (October 2012): 3764–75. http://dx.doi.org/10.1109/tgrs.2012.2186582.

3

Li, Lingling, Pujiang Liang, Jingjing Ma, Licheng Jiao, Xiaohui Guo, Fang Liu, and Chen Sun. "A Multiscale Self-Adaptive Attention Network for Remote Sensing Scene Classification." Remote Sensing 12, no. 14 (July 10, 2020): 2209. http://dx.doi.org/10.3390/rs12142209.

Abstract:
High-resolution optical remote sensing image classification is an important research direction in the field of computer vision. It is difficult to extract rich semantic information from remote sensing images that contain many objects. In this paper, a multiscale self-adaptive attention network (MSAA-Net) is proposed for optical remote sensing image classification; it comprises multiscale feature extraction, adaptive information fusion, and classification. In the first part, two parallel convolution blocks with different receptive fields capture multiscale features. Then, a squeeze step obtains global information and an excitation step learns per-channel weights, which adaptively select useful information from the multiscale features. Finally, the high-level features are classified by several residual blocks with an attention mechanism and a fully connected layer. Experiments were conducted on the UC Merced, NWPU, and Google SIRI-WHU datasets. Compared to state-of-the-art methods, MSAA-Net shows strong accuracy and robustness, with average accuracies of 94.52%, 95.01%, and 95.21% on these three widely used remote sensing datasets.
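The adaptive channel weighting this abstract describes follows the squeeze-and-excitation pattern. A minimal NumPy sketch of that pattern (the shapes, reduction ratio, and function names here are illustrative assumptions, not the MSAA-Net implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel weighting.

    features: (C, H, W) feature map
    w1: (C//r, C) squeeze projection, w2: (C, C//r) excitation projection
    """
    squeezed = features.mean(axis=(1, 2))      # squeeze: global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)    # bottleneck + ReLU
    weights = sigmoid(w2 @ hidden)             # per-channel weights in (0, 1)
    return features * weights[:, None, None]   # reweight each channel

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4 (illustrative)
w2 = rng.standard_normal((8, 2))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because each learned weight lies strictly in (0, 1), the operation can only attenuate channels, which is how the network emphasizes informative channels relative to the rest.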
4

Wulamu, Aziguli, Zuxian Shi, Dezheng Zhang, and Zheyu He. "Multiscale Road Extraction in Remote Sensing Images." Computational Intelligence and Neuroscience 2019 (July 10, 2019): 1–9. http://dx.doi.org/10.1155/2019/2373798.

Abstract:
Recent advances in convolutional neural networks (CNNs) have shown impressive results in semantic segmentation. Among the successful CNN-based methods, U-Net has achieved exciting performance. In this paper, we propose a novel network architecture based on U-Net and atrous spatial pyramid pooling (ASPP) for the road extraction task in remote sensing. On the one hand, the U-Net structure effectively extracts valuable features; on the other hand, ASPP exploits multiscale context information in remote sensing images. Compared to the baseline, the proposed model improves pixelwise mean Intersection over Union (mIoU) by 3 points. Experimental results show that the proposed architecture can handle extraction of different road surface types under various terrains in Yinchuan city, solves the road connectivity problem to some extent, and shows a certain tolerance to shadows and occlusion.
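ASPP rests on atrous (dilated) convolution, which enlarges the receptive field without adding weights; ASPP applies several dilation rates in parallel over 2-D feature maps and concatenates the results. A 1-D NumPy sketch of just the atrous idea (the kernel values and rates are illustrative assumptions):

```python
import numpy as np

def dilated_conv1d(signal, kernel, rate):
    """1-D atrous convolution: taps are spaced `rate` samples apart,
    so a 3-tap kernel covers 2*rate + 1 samples with no extra weights."""
    k = len(kernel)
    span = (k - 1) * rate
    out = np.zeros(len(signal) - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, rate=1))  # same weights, window of width 3
print(dilated_conv1d(x, k, rate=3))  # same weights, window of width 7
```

The rate-3 output aggregates context over a window more than twice as wide while using exactly the same three parameters, which is why multiple parallel rates capture multiscale context cheaply.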
5

Vannier, Clémence, Chloé Vasseur, Laurence Hubert-Moy, and Jacques Baudry. "Multiscale ecological assessment of remote sensing images." Landscape Ecology 26, no. 8 (July 6, 2011): 1053–69. http://dx.doi.org/10.1007/s10980-011-9626-y.

6

Wang, Yong, Wenkai Zhang, Zhengyuan Zhang, Xin Gao, and Xian Sun. "Multiscale Multiinteraction Network for Remote Sensing Image Captioning." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15 (2022): 2154–65. http://dx.doi.org/10.1109/jstars.2022.3153636.

7

Wang, Yani, Jinfang Dong, and Bo Wang. "Feature Matching Optimization of Multimedia Remote Sensing Images Based on Multiscale Edge Extraction." Computational Intelligence and Neuroscience 2022 (June 2, 2022): 1–7. http://dx.doi.org/10.1155/2022/1764507.

Abstract:
To address the low efficiency of image feature matching in traditional remote sensing image databases, this paper proposes feature matching optimization for multimedia remote sensing images based on multiscale edge extraction. It first expounds the basic theory of multiscale edges and then registers multimedia remote sensing images based on the selection of optimal control points. One hundred remote sensing images of size 3619 × 1825 at 30 m resolution were selected as experimental data; the computer was configured with a 2.9 GHz i7 CPU and 16 GB of memory. The research comprises two parts: analysis of the matching efficiency of the multiscale model, and analysis of its matching accuracy together with the choice of model parameters. The results show that when the image data volume is large, feature matching takes more time. As the sampling rate increases, the image data volume decreases rapidly and the feature matching time shortens accordingly, which provides a theoretical basis for the multiscale model's efficiency gain. Because all images share the same size (3619 × 1825), the matching time between image pairs differs little, so total matching time grows linearly with the number of images in the database. When the database holds many images, a larger number of pyramid layers should be used; when it holds few, the number of layers should be reduced to preserve matching accuracy. The availability of the proposed method is thus demonstrated.
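The efficiency argument above comes down to the data reduction of an image pyramid: each coarser level subsamples the previous one, so matching at coarse levels touches far fewer pixels. A minimal sketch, assuming simple every-other-pixel subsampling (the 3619 × 1825 size comes from the abstract; the pyramid scheme itself is illustrative):

```python
import numpy as np

def downsample(img, rate):
    """Keep every `rate`-th pixel in each dimension (a crude pyramid level)."""
    return img[::rate, ::rate]

def pyramid(img, levels):
    """Multiscale pyramid: each level halves resolution, roughly quartering the data."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1], 2))
    return out

img = np.zeros((1825, 3619))        # image size taken from the abstract
levels = pyramid(img, 3)
for lv in levels:
    print(lv.shape, lv.size)        # data volume drops ~4x per level
```

Two levels down, the data volume is already more than an order of magnitude smaller, which is why coarse-to-fine matching shortens feature matching time so sharply.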
8

Cui, Hao, Peng Jia, Guo Zhang, Yong-Hua Jiang, Li-Tao Li, Jing-Yin Wang, and Xiao-Yun Hao. "Multiscale Intensity Propagation to Remove Multiplicative Stripe Noise From Remote Sensing Images." IEEE Transactions on Geoscience and Remote Sensing 58, no. 4 (April 2020): 2308–23. http://dx.doi.org/10.1109/tgrs.2019.2947599.

9

dos Santos, Jefersson Alex, Philippe-Henri Gosselin, Sylvie Philipp-Foliguet, Ricardo da S. Torres, and Alexandre Xavier Falcão. "Interactive Multiscale Classification of High-Resolution Remote Sensing Images." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6, no. 4 (August 2013): 2020–34. http://dx.doi.org/10.1109/jstars.2012.2237013.

10

Zheng, Sheng, Wen-zhong Shi, Jian Liu, and Jinwen Tian. "Remote Sensing Image Fusion Using Multiscale Mapped LS-SVM." IEEE Transactions on Geoscience and Remote Sensing 46, no. 5 (May 2008): 1313–22. http://dx.doi.org/10.1109/tgrs.2007.912737.

11

Teng, Lu, Feng Xue, and Qiudi Bai. "Remote Sensing Image Enhancement Via Edge-Preserving Multiscale Retinex." IEEE Photonics Journal 11, no. 2 (April 2019): 1–10. http://dx.doi.org/10.1109/jphot.2019.2902959.

12

Li, Tongyu, Jie Chen, Pu Cheng, and Lu Yu. "A Retinex-based Enhancement method for Ocean Remote Sensing Image." E3S Web of Conferences 290 (2021): 02004. http://dx.doi.org/10.1051/e3sconf/202129002004.

Abstract:
In order to effectively monitor important sea areas, one key issue is the detection of small dynamic targets such as ships. Besides monitoring the hull itself, small targets can also be detected through ship wakes, which requires sea images with clear texture features. This paper first reviews the multiscale retinex (MSR) method, commonly used to enhance image contrast, and then proposes a novel contrast enhancement algorithm based on the subband-decomposed multiscale retinex (SDMSR) method. Experimental results show that the proposed method achieves remarkable enhancement of ocean remote sensing images containing clouds, whitecaps, etc.
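Classic multiscale retinex, which this paper extends with subband decomposition, averages log-ratios of the image to a smoothed "surround" estimate at several scales. A minimal NumPy sketch of plain MSR, using a box filter as a stand-in for the usual Gaussian surround (the scales and filter are illustrative assumptions, not the SDMSR method):

```python
import numpy as np

def box_blur(img, radius):
    """Crude surround estimate: separable box filter (a Gaussian in real MSR)."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, rows)

def multiscale_retinex(img, radii=(1, 2, 4), eps=1e-6):
    """MSR: average over scales of log(image) - log(surround)."""
    log_i = np.log(img + eps)
    out = np.zeros_like(img)
    for r in radii:
        out += log_i - np.log(box_blur(img, r) + eps)
    return out / len(radii)

rng = np.random.default_rng(1)
img = rng.uniform(0.1, 1.0, (16, 16))
enhanced = multiscale_retinex(img)
print(enhanced.shape)  # (16, 16)
```

On a constant image the log-ratio vanishes at every scale, which is why retinex responds to local contrast (e.g., wake texture) rather than absolute intensity.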
13

Zhou, Liming, Haoxin Yan, Yingzi Shan, Chang Zheng, Yang Liu, Xianyu Zuo, and Baojun Qiao. "Aircraft Detection for Remote Sensing Images Based on Deep Convolutional Neural Networks." Journal of Electrical and Computer Engineering 2021 (August 11, 2021): 1–16. http://dx.doi.org/10.1155/2021/4685644.

Abstract:
Aircraft detection in remote sensing images, as one branch of computer vision, is a significant task in deep-learning-based image processing. Recently, many high-performance aircraft detection algorithms have been developed and applied in different scenarios. However, these algorithms still have problems; for instance, they miss some small-scale aircraft when applied to remote sensing images. There are two main reasons: first, aircraft in remote sensing images are usually small, which makes detection difficult; second, the background of a remote sensing image is usually complex, so algorithms are easily affected by background noise. To address the small-size problem, this paper proposes the Multiscale Detection Network (MSDN), which introduces a multiscale detection architecture to detect small-scale aircraft. To resist background noise, this paper proposes the Deeper and Wider Module (DAWM), which increases the receptive field of the network to alleviate this effect. To address both problems simultaneously, the DAWM is integrated into the MSDN, and the resulting structure is named the Multiscale Refined Detection Network (MSRDN). Experimental results show that MSRDN detects small-scale aircraft that other algorithms miss and achieves better performance on the evaluation metrics.
14

Zhou, Liming, Yahui Li, Xiaohan Rao, Cheng Liu, Xianyu Zuo, and Yang Liu. "Ship Target Detection in Optical Remote Sensing Images Based on Multiscale Feature Enhancement." Computational Intelligence and Neuroscience 2022 (October 6, 2022): 1–20. http://dx.doi.org/10.1155/2022/2605140.

Abstract:
Due to the multiscale characteristics of ship targets in optical remote sensing images (ORSIs), deep-learning-based ship target detection in ORSIs still faces great challenges. Aiming at the low accuracy of multiscale ship target detection, this paper proposes a detection algorithm based on multiscale feature enhancement built on YOLO v4. First, an improved mixed convolution is introduced into the inverted residual block (IRes) to form a mixed inverted residual block (MIRes). The MIRes replaces the residual blocks (Res) in the deep CSP modules of the backbone network to enhance its multiscale feature extraction capability. Second, considering the receptive fields, feature information, and detected-object scales of different feature maps, two multiscale feature enhancement modules, the small-scale feature enhancement module (SFEM) and the middle-scale feature enhancement module (MFEM), are proposed to enhance the feature information of the middle- and low-level feature maps, respectively, before the enhanced maps are sent to the detection head. Finally, experiments on the LEVIR-ship and NWPU VHR-10 datasets show ship detection accuracies of 79.55% and 90.70%, respectively, improvements of 3.25% and 3.56% over YOLO v4.
15

Cheng, Ruihong, Huajun Wang, and Ping Luo. "Remote sensing image super-resolution using multi-scale convolutional sparse coding network." PLOS ONE 17, no. 10 (October 26, 2022): e0276648. http://dx.doi.org/10.1371/journal.pone.0276648.

Abstract:
With the development of convolutional neural networks, impressive success has been achieved in remote sensing image super-resolution. However, reconstruction performance remains unsatisfactory because remote sensing images lack detail compared to natural images. This paper therefore presents a novel multiscale convolutional sparse coding network (MCSCN) to carry out super-resolution (SR) reconstruction of remote sensing images with rich details. The MCSCN, which consists of a multiscale convolutional sparse coding module (MCSCM) with dictionary convolution units, improves the extraction of high-frequency features, and combining sparse features of multiple sizes yields richer feature information. Finally, a sub-pixel convolution layer that combines global and local features serves as the reconstruction block. Experimental results show that the MCSCN outperforms several existing state-of-the-art methods in terms of peak signal-to-noise ratio and structural similarity.
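The sub-pixel convolution layer mentioned at the end upsamples by rearranging channels into spatial positions (the "pixel shuffle" step). A NumPy sketch of that rearrangement alone (the shapes and upscale factor are illustrative; the convolution that precedes it is omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel convolution's rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).

    Each group of r*r channels becomes an r-by-r block of output pixels."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)     # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16.0).reshape(4, 2, 2)   # 4 channels of 2x2 -> 1 channel of 4x4
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4)
```

Because the layer only rearranges values, all the learning happens in the convolution before it, which keeps upsampling cheap while letting the network place high-frequency detail per output pixel.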
16

Yang, Xin, Lei Hu, Yongmei Zhang, and Yunqing Li. "MRA-SNet: Siamese Networks of Multiscale Residual and Attention for Change Detection in High-Resolution Remote Sensing Images." Remote Sensing 13, no. 22 (November 11, 2021): 4528. http://dx.doi.org/10.3390/rs13224528.

Abstract:
Remote sensing image change detection (CD) is an important task in remote sensing image analysis and is essential for an accurate understanding of changes in the Earth’s surface. The technology of deep learning (DL) is becoming increasingly popular in solving CD tasks for remote sensing images. Most existing CD methods based on DL tend to use ordinary convolutional blocks to extract and compare remote sensing image features, which cannot fully extract the rich features of high-resolution (HR) remote sensing images. In addition, most of the existing methods lack robustness to pseudochange information processing. To overcome the above problems, in this article, we propose a new method, namely MRA-SNet, for CD in remote sensing images. Utilizing the UNet network as the basic network, the method uses the Siamese network to extract the features of bitemporal images in the encoder separately and perform the difference connection to better generate difference maps. Meanwhile, we replace the ordinary convolution blocks with Multi-Res blocks to extract spatial and spectral features of different scales in remote sensing images. Residual connections are used to extract additional detailed features. To better highlight the change region features and suppress the irrelevant region features, we introduced the Attention Gates module before the skip connection between the encoder and the decoder. Experimental results on a public dataset of remote sensing image CD show that our proposed method outperforms other state-of-the-art (SOTA) CD methods in terms of evaluation metrics and performance.
17

Yuan, Min, Dingbang Ren, Qisheng Feng, Zhaobin Wang, Yongkang Dong, Fuxiang Lu, and Xiaolin Wu. "MCAFNet: A Multiscale Channel Attention Fusion Network for Semantic Segmentation of Remote Sensing Images." Remote Sensing 15, no. 2 (January 6, 2023): 361. http://dx.doi.org/10.3390/rs15020361.

Abstract:
Semantic segmentation of urban remote sensing images is one of the most crucial tasks in the field of remote sensing. Remote sensing images contain rich information on ground objects, such as shape, location, and boundary, which can be observed in high-resolution imagery. Identifying these objects is exceedingly challenging because of their large intraclass variance and low interclass variance. In this article, we propose a multiscale hierarchical channel attention fusion network based on a transformer and a CNN, which we name the multiscale channel attention fusion network (MCAFNet). MCAFNet uses ResNet-50 and ViT-B/16 to learn the global–local context, which strengthens the semantic feature representation. Specifically, a global–local transformer block (GLTB) is deployed in the encoder stage; this design handles image details at low resolution and extracts global image features better than previous methods. In the decoder, a channel attention optimization module and a fusion module are added to better integrate high- and low-dimensional feature maps, enhancing the network's ability to capture small-scale semantic information. The proposed method is evaluated on the ISPRS Vaihingen and Potsdam datasets. Both quantitative and qualitative evaluations show the competitive performance of MCAFNet in comparison to mainstream methods. In addition, extensive ablation experiments on the Vaihingen dataset test the effectiveness of the network components.
18

Jie, Yongshi, Hongyan He, Kun Xing, Anzhi Yue, Wei Tan, Chunyu Yue, Cheng Jiang, and Xuan Chen. "MECA-Net: A MultiScale Feature Encoding and Long-Range Context-Aware Network for Road Extraction from Remote Sensing Images." Remote Sensing 14, no. 21 (October 25, 2022): 5342. http://dx.doi.org/10.3390/rs14215342.

Abstract:
Road extraction from remote sensing images is significant for urban planning, intelligent transportation, and vehicle navigation. However, automatic extraction is challenging because road scales in remote sensing images vary greatly and slender roads are difficult to identify. Moreover, roads are often blocked by the shadows of trees and buildings, which results in discontinuous and incomplete extraction. To solve these problems, this paper proposes a multiscale feature encoding and long-range context-aware network (MECA-Net) for road extraction. MECA-Net adopts an encoder–decoder structure and contains two core modules. One is the multiscale feature encoding module, which aggregates multiscale road features to improve the recognition of slender roads. The other is the long-range context-aware module, consisting of a channel attention module and a strip pooling module, which obtains sufficient long-range context information along the channel and spatial dimensions to alleviate road occlusion. Experimental results on the open DeepGlobe and Massachusetts road datasets indicate that MECA-Net outperforms the other eight mainstream networks, which verifies the effectiveness of the proposed method.
19

Wu, W. "Derivation of Tree Canopy Cover by Multiscale Remote Sensing Approach." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII-4/W25 (August 31, 2012): 142–49. http://dx.doi.org/10.5194/isprsarchives-xxxviii-4-w25-142-2011.

20

Wu, Weicheng, Waleed M. Al-Shafie, Ahmad S. Mhaimeed, Feras Ziadat, Vinay Nangia, and William Bill Payne. "Soil Salinity Mapping by Multiscale Remote Sensing in Mesopotamia, Iraq." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7, no. 11 (November 2014): 4442–52. http://dx.doi.org/10.1109/jstars.2014.2360411.

21

Zhang, Xiaoqin, Zhiheng Xiao, Dongyang Li, Mingyu Fan, and Li Zhao. "Semantic Segmentation of Remote Sensing Images Using Multiscale Decoding Network." IEEE Geoscience and Remote Sensing Letters 16, no. 9 (September 2019): 1492–96. http://dx.doi.org/10.1109/lgrs.2019.2901592.

22

Zaouali, Mariem, Sonia Bouzidi, and Ezzeddine Zagrouba. "Review of multiscale geometric decompositions in a remote sensing context." Journal of Electronic Imaging 25, no. 6 (December 2, 2016): 061617. http://dx.doi.org/10.1117/1.jei.25.6.061617.

23

Wu, Yuanyuan, Siling Feng, Cong Lin, Haijie Zhou, and Mengxing Huang. "A Three Stages Detail Injection Network for Remote Sensing Images Pansharpening." Remote Sensing 14, no. 5 (February 22, 2022): 1077. http://dx.doi.org/10.3390/rs14051077.

Abstract:
Multispectral (MS) pansharpening is crucial to improving the spatial resolution of MS images, as it can provide images with both high spatial and high spectral resolution. Deep-learning-based pansharpening has become a topical approach to dealing with the distortion of spatio-spectral information. To improve the preservation of spatio-spectral information, we propose a novel three-stage detail injection pansharpening network (TDPNet) for remote sensing images. First, we put forward a dual-branch multiscale feature extraction block, which extracts details at four scales from panchromatic (PAN) images and from the difference between duplicated PAN and MS images. Next, cascade cross-scale fusion (CCSF) employs fine-scale fusion information as prior knowledge for the coarse-scale fusion to compensate for information lost during downsampling and to retain high-frequency details; CCSF combines fine-scale and coarse-scale fusion based on residual learning and the prior information of the four scales. Last, we design a multiscale detail compensation mechanism and a multiscale skip connection block to reconstruct the injected details, which strengthens spatial details and reduces parameters. Abundant experiments on three satellite datasets at degraded and full resolutions confirm that TDPNet trades off spectral information and spatial details and improves the fidelity of sharper MS images. Both quantitative and subjective evaluations indicate that TDPNet outperforms the compared state-of-the-art approaches in generating MS images with high spatial resolution.
24

Li, Linyi, Tingbao Xu, and Yun Chen. "Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features." Computational Intelligence and Neuroscience 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/9858531.

Abstract:
In recent years, the spatial resolution of remote sensing images has improved greatly. However, a higher spatial resolution does not always lead to better automatic scene classification. Visual attention is an important characteristic of the human visual system that can effectively help classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm is proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) is developed for high-resolution remote sensing scene classification. FC-VAF was evaluated on scenes from widely used high-resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 imagery, and achieved more accurate classification results than the comparison methods according to quantitative accuracy indices. We also discuss the role and impact of different decomposition levels and wavelets on classification accuracy. FC-VAF improves the accuracy of high-resolution scene classification and thereby advances digital image analysis and the applications of high-resolution remote sensing images.
25

Zhang, Jing, Shaobo Zhang, Hui Wang, Yunsong Li, and Ruitao Lu. "Image Compression Network Structure Based on Multiscale Region of Interest Attention Network." Remote Sensing 15, no. 2 (January 16, 2023): 522. http://dx.doi.org/10.3390/rs15020522.

Abstract:
In this study, we propose a region of interest (ROI) compression algorithm within a deep learning autoencoder framework to improve image reconstruction and reduce distortion in the ROI. Whereas most traditional ROI-based compression algorithms rely on manual labeling of the ROI to separate regions, we first adopt a remote sensing cloud detection algorithm to detect important targets, that is, to separate the remote sensing background from important regions and thereby determine the target regions. We then design a coarse-to-fine multiscale ROI self-coding network with a hierarchical super-prior layer to synthesize images, reducing spatial redundancy more effectively and greatly improving the rate–distortion performance of image compression. By applying a spatial attention mechanism to the ROI in the compression network, we achieve better compression performance.
26

Lyu, Xin, Yiwei Fang, Baogen Tong, Xin Li, and Tao Zeng. "Multiscale Normalization Attention Network for Water Body Extraction from Remote Sensing Imagery." Remote Sensing 14, no. 19 (October 7, 2022): 4983. http://dx.doi.org/10.3390/rs14194983.

Abstract:
Extracting water bodies is an important task in remote sensing imagery (RSI) interpretation. Deep convolution neural networks (DCNNs) show great potential in feature learning; they are widely used in the water body interpretation of RSI. However, the accuracy of DCNNs is still unsatisfactory due to differences in the many hetero-features of water bodies, such as spectrum, geometry, and spatial size. To address the problem mentioned above, this paper proposes a multiscale normalization attention network (MSNANet) which can accurately extract water bodies in complicated scenarios. First of all, a multiscale normalization attention (MSNA) module was designed to merge multiscale water body features and highlight feature representation. Then, an optimized atrous spatial pyramid pooling (OASPP) module was developed to refine the representation by leveraging context information, which improves segmentation performance. Furthermore, a head module (FEH) for feature enhancing was devised to realize high-level feature enhancement and reduce training time. The extensive experiments were carried out on two benchmarks: the Surface Water dataset and the Qinghai–Tibet Plateau Lake dataset. The results indicate that the proposed model outperforms current mainstream models on OA (overall accuracy), f1-score, kappa, and MIoU (mean intersection over union). Moreover, the effectiveness of the proposed modules was proven to be favorable through ablation study.
27

Huang, Min, Cong Cheng, and Gennaro De Luca. "Remote Sensing Data Detection Based on Multiscale Fusion and Attention Mechanism." Mobile Information Systems 2021 (November 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/6466051.

Abstract:
Remote sensing images are often of low quality due to equipment limitations, resulting in poor accuracy, and it is extremely difficult to identify a target object that is blurred or small. The main challenge is that objects in remote sensing images occupy very few pixels. Traditional convolutional networks struggle to extract enough information through local convolution and are easily disturbed by noise, so they are usually not ideal for classifying and detecting small targets. A common solution is to process feature-map information at multiple scales, but this does not consider the supplementary effect of the feature map's context information on its semantics. In this work, to enable CNNs to make full use of context information and improve their representation ability, we propose a residual attention feature fusion method, which improves the representation ability of feature maps by fusing contextual feature-map information at different scales, and a spatial attention mechanism based on a global pixelwise convolution response. This method compresses global pixels through convolution and weights the original feature-map pixels, reducing noise interference and improving the network's ability to capture globally critical pixel information. In experiments, remote sensing ship recognition results show that the structure improves small-target detection performance, and results on CIFAR-10 and CIFAR-100 show that the attention mechanism is general and practical.
28

Zhang, Cheng, and Dan He. "A Deep Multiscale Fusion Method via Low-Rank Sparse Decomposition for Object Saliency Detection Based on Urban Data in Optical Remote Sensing Images." Wireless Communications and Mobile Computing 2020 (May 8, 2020): 1–14. http://dx.doi.org/10.1155/2020/7917021.

Abstract:
Urban data provide a wealth of information that can support people's life and work. In this work, we study object saliency detection in optical remote sensing images, which is conducive to the interpretation of urban scenes. Saliency detection selects the regions of a remote sensing image that carry important information, closely imitating the human visual system, and plays a powerful role in other image-processing tasks. It has achieved great success in change detection, object tracking, temperature reversal, and other tasks. Traditional methods suffer from disadvantages such as poor robustness and high computational complexity. Therefore, this paper proposes a deep multiscale fusion method via low-rank sparse decomposition for object saliency detection in optical remote sensing images. First, we perform multiscale segmentation of the remote sensing images. Then, we calculate the saliency value, and the proposal regions are generated. The superpixel blocks of the remaining proposal regions of the segmentation map are input into a convolutional neural network; by extracting deep features, the saliency value is calculated and the proposal regions are updated. The feature transformation matrix is obtained with the gradient descent method, and high-level semantic prior knowledge is obtained using the convolutional neural network. The process is iterated continuously to obtain the saliency map at each scale. The low-rank sparse decomposition of the transformed matrix is carried out by robust principal component analysis. Finally, the weighted cellular automata method is used to fuse the multiscale saliency maps with the saliency map calculated from the sparse noise obtained by decomposition. Meanwhile, the object prior knowledge can filter out most of the background information, reduce unnecessary deep feature extraction, and meaningfully improve the saliency detection rate. The experimental results show that the proposed method effectively improves the detection effect compared to other deep learning methods.
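The low-rank plus sparse decomposition step this abstract relies on (robust principal component analysis) can be illustrated with a textbook principal component pursuit solver; the inexact augmented Lagrange multiplier scheme and default parameters below are standard choices, not the paper's implementation.

```python
import numpy as np

def shrink(M, tau):
    # soft-thresholding: proximal operator of the L1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, iters=500, tol=1e-7):
    """Decompose D ~ L + S with L low-rank and S sparse (principal component pursuit)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    if mu is None:
        mu = (m * n) / (4.0 * np.abs(D).sum())  # standard step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # Lagrange multiplier for the constraint D = L + S
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S
```

On a synthetic low-rank matrix corrupted by sparse spikes, `rpca` recovers both components almost exactly, which is the property the saliency method exploits to separate background (low-rank) from salient structure (sparse).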
APA, Harvard, Vancouver, ISO, and other styles
29

Zhen, Longxia, and Wei Liang. "Planning and Design Method of Multiangle Ecological Building Edge Space under the Background of Rural Revitalization." Mathematical Problems in Engineering 2022 (September 16, 2022): 1–9. http://dx.doi.org/10.1155/2022/2848164.

Full text
Abstract:
Under the background of rural revitalization, to realize the planning and design of ecological building edge space, a multi-perspective planning and design method based on remote sensing image edge segmentation is proposed. Remote sensing visual detection of ecological buildings is realized by fusing multiscale features and multisource scene remote sensing images, and the extracted remote sensing image feature points are calibrated to extract the location information, texture features, super-resolution edge information features, and different levels of change features of the spatial distribution of the ecological building edge. A background difference detection model for ecological building remote sensing images is established, and the distance of the centroid at the corresponding level is calculated through frame dynamic planning and differential image clustering. Combined with the edge contour detection method for ecological building remote sensing images, edge space planning and design are realized. The simulation results show that this method achieves higher planning accuracy, detects the contour of ecological building edge space more accurately, and improves the dynamic planning and positioning ability for the multi-perspective distribution of ecological building edge space.
APA, Harvard, Vancouver, ISO, and other styles
30

Yang, Zhou, Xiao-dong Mu, Shu-yang Wang, and Chen-hui Ma. "Scene classification of remote sensing images based on multiscale features fusion." Optics and Precision Engineering 26, no. 12 (2018): 3099–107. http://dx.doi.org/10.3788/ope.20182612.3099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Yao, Qunli, Xian Hu, and Hong Lei. "Object Detection in Remote Sensing Images Using Multiscale Convolutional Neural Networks." Acta Optica Sinica 39, no. 11 (2019): 1128002. http://dx.doi.org/10.3788/aos201939.1128002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tang, T., T. Chen, B. Zhu, and Y. Ye. "MU-NET: A MULTISCALE UNSUPERVISED NETWORK FOR REMOTE SENSING IMAGE REGISTRATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 30, 2022): 537–44. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-537-2022.

Full text
Abstract:
Abstract. Registration of multi-sensor or multi-modal image pairs with a large degree of distortion is a fundamental task for many remote sensing applications. To achieve accurate and low-cost remote sensing image registration, we propose a multiscale unsupervised network (MU-Net). Without costly ground-truth labels, MU-Net directly learns the end-to-end mapping from image pairs to their transformation parameters. MU-Net performs a coarse-to-fine registration pipeline by stacking several deep neural network models on multiple scales, which prevents the backpropagation from falling into a local extremum and resists significant image distortions. In addition, a novel loss function paradigm is designed based on structural similarity, which makes MU-Net suitable for various types of multi-modal images. MU-Net is compared with traditional feature-based and area-based methods, as well as supervised and other unsupervised learning methods, on Optical-Optical, Optical-Infrared, Optical-SAR, and Optical-Map datasets. Experimental results show that MU-Net achieves more robust and accurate registration performance on these image pairs with geometric and radiometric distortions. We share the datasets and the code, implemented in PyTorch, at https://github.com/yeyuanxin110/MU-Net.
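A structural-similarity-based loss of the kind this abstract mentions can be sketched with the generic global SSIM formula; this is an illustrative stand-in, not necessarily MU-Net's exact multiscale formulation.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # global structural similarity for images scaled to [0, 1];
    # c1 and c2 are the usual small stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(x, y):
    # registration loss to minimize: 0 for identical images, larger otherwise
    return 1.0 - ssim(x, y)
```

In practice SSIM is usually computed over local windows and averaged; the global version above keeps the sketch short while showing why the loss is invariant to shared brightness and contrast shifts, a useful property for multi-modal pairs.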
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Rui, Fengcai Zheng, and Wei Huang. "Multilabel Remote Sensing Image Annotation With Multiscale Attention and Label Correlation." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14 (2021): 6951–61. http://dx.doi.org/10.1109/jstars.2021.3091134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Shaohui, and Hongbo Su. "An Eigenpoint Based Multiscale Method for Validating Quantitative Remote Sensing Products." Advances in Meteorology 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/692313.

Full text
Abstract:
This letter first proposes the eigenpoint concept for quantitative remote sensing products (QRSPs) after discussing the eigenhomogeneity and eigenaccuracy of land surface variables. The eigenpoints are located according to the à trous wavelet planes of the QRSP. Based on these concepts, this letter proposes an eigenpoint-based multiscale method for validating the QRSPs. The basic idea is that the QRSPs at coarse scales are validated by validating their eigenpoints using the QRSP at fine scale, and the QRSP at fine scale is in turn validated using observation data at the ground-based eigenpoints at instrument scale. The ground-based eigenpoints derived from the forecasted QRSP can be used as the observation positions when the satellites pass over the studied area. Experimental results demonstrate that the proposed method is manpower- and time-saving compared with the ideal scanning method, and that simultaneous observation at these eigenpoints is satisfactory in terms of efficiency and accuracy.
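The à trous wavelet planes used here to locate eigenpoints come from the standard B3-spline à trous ("with holes") decomposition; the numpy sketch below implements that classic algorithm, not the authors' exact code.

```python
import numpy as np

K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline kernel

def smooth(img, step):
    # separable à trous convolution: kernel taps spaced `step` pixels apart
    out = np.asarray(img, dtype=float)
    for axis in (0, 1):
        padded = np.pad(out, [(2 * step, 2 * step) if a == axis else (0, 0)
                              for a in (0, 1)], mode="reflect")
        acc = np.zeros(out.shape)
        for i, k in enumerate(K):
            shift = i * step  # tap offsets -2s..2s relative to the pad origin
            sl = [slice(None), slice(None)]
            sl[axis] = slice(shift, shift + out.shape[axis])
            acc += k * padded[tuple(sl)]
        out = acc
    return out

def atrous_planes(img, levels=3):
    # wavelet planes w_j plus the final smooth residual c_J;
    # by telescoping, img == sum(planes) + residual
    planes, c = [], np.asarray(img, dtype=float)
    for j in range(levels):
        c_next = smooth(c, 2 ** j)  # hole spacing doubles each level
        planes.append(c - c_next)
        c = c_next
    return planes, c
```

Each plane isolates detail at one scale while keeping the full image size, which is what makes it possible to pick eigenpoints scale by scale; the telescoping sum reconstructs the original product exactly.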
APA, Harvard, Vancouver, ISO, and other styles
35

Garzelli, Andrea, and Filippo Nencini. "Panchromatic sharpening of remote sensing images using a multiscale Kalman filter." Pattern Recognition 40, no. 12 (December 2007): 3568–77. http://dx.doi.org/10.1016/j.patcog.2007.05.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Corbane, C., D. Raclot, F. Jacob, J. Albergel, and P. Andrieux. "Remote sensing of soil surface characteristics from a multiscale classification approach." CATENA 75, no. 3 (November 2008): 308–18. http://dx.doi.org/10.1016/j.catena.2008.07.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Ge, Lingling Li, Hao Zhu, Xu Liu, and Licheng Jiao. "Adaptive Multiscale Deep Fusion Residual Network for Remote Sensing Image Classification." IEEE Transactions on Geoscience and Remote Sensing 57, no. 11 (November 2019): 8506–21. http://dx.doi.org/10.1109/tgrs.2019.2921342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Xueliang, Pengfeng Xiao, Xuezhi Feng, Li Feng, and Nan Ye. "Toward Evaluating Multiscale Segmentations of High Spatial Resolution Remote Sensing Images." IEEE Transactions on Geoscience and Remote Sensing 53, no. 7 (July 2015): 3694–706. http://dx.doi.org/10.1109/tgrs.2014.2381632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Weicheng, Eddy De Pauw, and Ulf Helldén. "Assessing woody biomass in African tropical savannahs by multiscale remote sensing." International Journal of Remote Sensing 34, no. 13 (March 25, 2013): 4525–49. http://dx.doi.org/10.1080/01431161.2013.777487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Vannier, Clémence, and Laurence Hubert-Moy. "Multiscale comparison of remote-sensing data for linear woody vegetation mapping." International Journal of Remote Sensing 35, no. 21 (November 2, 2014): 7376–99. http://dx.doi.org/10.1080/01431161.2014.968683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Xu, Jindong, Mengying Ni, Yanjie Zhang, Xiangrong Tong, Qiang Zheng, and Jinglei Liu. "Remote sensing image fusion method based on multiscale morphological component analysis." Journal of Applied Remote Sensing 10, no. 2 (June 9, 2016): 025018. http://dx.doi.org/10.1117/1.jrs.10.025018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lu, Xiaoyan, Yanfei Zhong, Zhuo Zheng, Ji Zhao, and Liangpei Zhang. "Edge-Reinforced Convolutional Neural Network for Road Detection in Very-High-Resolution Remote Sensing Imagery." Photogrammetric Engineering & Remote Sensing 86, no. 3 (March 1, 2020): 153–60. http://dx.doi.org/10.14358/pers.86.3.153.

Full text
Abstract:
Road detection in very-high-resolution remote sensing imagery is a hot research topic. However, the high resolution results in highly complex data distributions, which introduce considerable noise into road detection; for example, shadows and occlusions caused by disturbances on the roadside make it difficult to recognize roads accurately. In this article, a novel edge-reinforced convolutional neural network, combining multiscale feature extraction and edge reinforcement, is proposed to alleviate this problem. First, multiscale feature extraction is used in the center part of the proposed network to extract multiscale context information. Then edge reinforcement, which applies a simplified U-Net to learn additional edge information, is used to restore the road information. The two operations can be used with different convolutional neural networks. Finally, two public road data sets are adopted to verify the effectiveness of the proposed approach, with experimental results demonstrating its superiority.
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Weisheng, Xuesong Liang, and Meilin Dong. "MDECNN: A Multiscale Perception Dense Encoding Convolutional Neural Network for Multispectral Pan-Sharpening." Remote Sensing 13, no. 3 (February 2, 2021): 535. http://dx.doi.org/10.3390/rs13030535.

Full text
Abstract:
With the rapid development of deep neural networks in the field of remote sensing image fusion, pan-sharpening methods based on convolutional neural networks have achieved remarkable effects. However, because remote sensing images contain complex features, existing methods cannot fully extract spatial features while maintaining spectral quality, resulting in insufficient reconstruction capability. To produce high-quality pan-sharpened images, a multiscale perception dense encoding convolutional neural network (MDECNN) is proposed. The network is based on dual-stream input; it designs multiscale blocks to separately extract the rich spatial information contained in panchromatic (PAN) images, designs feature enhancement blocks and dense encoding structures to fully learn the feature mapping relationship, and proposes a comprehensive loss constraint. Spectral mapping is used to maintain spectral quality and obtain high-quality fused images. Experiments on different satellite datasets show that this method is superior to existing methods in both subjective and objective evaluations.
APA, Harvard, Vancouver, ISO, and other styles
44

Tian, Shu, Lin Cao, Lihong Kang, Xiangwei Xing, Jing Tian, Kangning Du, Ke Sun, Chunzhuo Fan, Yuzhe Fu, and Ye Zhang. "A Novel Hybrid Attention-Driven Multistream Hierarchical Graph Embedding Network for Remote Sensing Object Detection." Remote Sensing 14, no. 19 (October 4, 2022): 4951. http://dx.doi.org/10.3390/rs14194951.

Full text
Abstract:
Multiclass geospatial object detection in high-spatial-resolution remote-sensing images (HSRIs) has recently attracted considerable attention in many remote-sensing applications as a fundamental task. However, the complexity and uncertainty of the spatial distribution among multiclass geospatial objects still pose huge challenges for object detection in HSRIs. Most current remote-sensing object-detection approaches rely on deep convolutional neural networks (CNNs). Nevertheless, most existing methods focus only on mining visual characteristics and lose sight of spatial or semantic relation discrimination, eventually degrading object-detection performance in HSRIs. To tackle these challenges, we propose a novel hybrid attention-driven multistream hierarchical graph embedding network (HA-MHGEN) that explores complementary spatial and semantic patterns to improve remote-sensing object-detection performance. Specifically, we first construct hierarchical spatial graphs for multiscale spatial relation representation. Semantic graphs are also constructed by integrating the word embeddings of object category labels on graph nodes. Afterwards, we develop a self-attention-aware multiscale graph convolutional network (GCN) to derive stronger intra- and interobject hierarchical spatial relations and contextual semantic relations, respectively. These two relation networks are followed by a novel cross-attention-driven spatial- and semantic-feature fusion module that utilizes a multihead attention mechanism to learn associations between diverse spatial and semantic correlations and endow them with a more powerful discrimination ability. With the collaborative learning of the three relation networks, the proposed HA-MHGEN grasps explicit and implicit relations from spatial and semantic patterns and boosts multiclass object-detection performance in HSRIs.
Comprehensive and extensive experimental evaluation results on three benchmarks, namely, DOTA, DIOR, and NWPU VHR-10, demonstrate the effectiveness and superiority of our proposed method compared with that of other advanced remote-sensing object-detection methods.
APA, Harvard, Vancouver, ISO, and other styles
45

Guan, XianMing, Di Wang, Luhe Wan, and Jiyi Zhang. "Extracting Wetland Type Information with a Deep Convolutional Neural Network." Computational Intelligence and Neuroscience 2022 (May 18, 2022): 1–11. http://dx.doi.org/10.1155/2022/5303872.

Full text
Abstract:
Wetlands have important ecological value. The application of wetland remote sensing is essential for timely and accurate analysis of the current situation in wetlands and of dynamic changes in wetland resources, but the boundaries between wetland types are often indistinct in high-resolution remote sensing images, and high classification accuracy and time efficiency cannot be guaranteed simultaneously. Extraction of wetland type information based on high-spatial-resolution remote sensing images is a bottleneck that has hindered wetland development research and change detection. This paper proposes an automatic and efficient method for extracting wetland type information. First, the object-oriented multiscale segmentation method is used to realize fine segmentation of high-resolution remote sensing images, and then the deep convolutional neural network model AlexNet is used to automatically classify the wetland image types. The method is verified in a case study involving field-measured data, and the classification results are compared with those of traditional classification methods. The results show that the proposed method extracts different wetland types in high-resolution remote sensing images more accurately and efficiently than the traditional classification methods. The proposed method will be helpful in the extension and application of wetland remote sensing technology and will provide technical support for the protection, development, and utilization of wetland resources.
APA, Harvard, Vancouver, ISO, and other styles
46

Guo, Mingqiang, Zhongyang Yu, Yongyang Xu, Ying Huang, and Chunfeng Li. "ME-Net: A Deep Convolutional Neural Network for Extracting Mangrove Using Sentinel-2A Data." Remote Sensing 13, no. 7 (March 29, 2021): 1292. http://dx.doi.org/10.3390/rs13071292.

Full text
Abstract:
Mangroves play an important role in many aspects of ecosystem services, and they should be accurately extracted from remote sensing imagery to dynamically map and monitor the mangrove distribution area. However, popular mangrove extraction methods, such as the object-oriented method, still have defects for remote sensing imagery: they offer limited automation and are time-consuming and laborious. A pixel classification model inspired by deep learning technology was proposed to solve these problems. Three modules in the proposed model were designed to improve the model performance: a multiscale context embedding module extracts multiscale context information, a global attention module restores location information, and a boundary fitting unit optimizes the boundary of the feature map. Remote sensing imagery and mangrove distribution ground-truth labels obtained through visual interpretation were used to build the dataset, which was then used to train a deep convolutional neural network (CNN) for mangrove extraction. Finally, comparative experiments were conducted to prove the potential of the method for mangrove extraction. We selected the Sentinel-2A remote sensing data acquired on 13 April 2018 in Hainan Dongzhaigang National Nature Reserve in China to conduct a group of experiments. After processing, the data comprised 2093 × 2214 pixels, and a mangrove extraction dataset was generated. The dataset was built from the Sentinel-2A data and includes five original bands, namely R, G, B, NIR, and SWIR-1, and six multispectral indices, namely the normalized difference vegetation index (NDVI), modified normalized difference water index (MNDWI), forest discrimination index (FDI), wetland forest index (WFI), mangrove discrimination index (MDI), and the first principal component (PCA1). The dataset has a total of 6400 images. Experimental results based on the dataset show that the overall accuracy of the trained mangrove extraction network reaches 97.48%. Our method benefits from the CNN and achieves a higher intersection-over-union ratio than other machine learning and pixel classification methods. The designed global attention module, multiscale context embedding, and boundary fitting unit are helpful for mangrove extraction.
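Two of the spectral indices listed in this abstract have standard, well-known definitions that are easy to sketch; the more specialized indices (FDI, WFI, MDI) are omitted here because their formulas vary between sources.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # normalized difference vegetation index: (NIR - Red) / (NIR + Red);
    # eps guards against division by zero over dark pixels
    return (nir - red) / (nir + red + eps)

def mndwi(green, swir1, eps=1e-9):
    # modified normalized difference water index: (Green - SWIR1) / (Green + SWIR1)
    return (green - swir1) / (green + swir1 + eps)
```

Both functions operate elementwise on reflectance arrays, so stacking their outputs alongside the raw bands, as the dataset above does, is a one-line operation per index.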
APA, Harvard, Vancouver, ISO, and other styles
47

Fan, Xiangsuo, Chuan Yan, Jinlong Fan, and Nayi Wang. "Improved U-Net Remote Sensing Classification Algorithm Fusing Attention and Multiscale Features." Remote Sensing 14, no. 15 (July 27, 2022): 3591. http://dx.doi.org/10.3390/rs14153591.

Full text
Abstract:
The selection and representation of classification features in remote sensing images play crucial roles in image classification accuracy. To effectively improve the feature classification accuracy, an improved U-Net remote sensing classification algorithm fusing attention and multiscale features is proposed in this paper, called spatial attention-atrous spatial pyramid pooling U-Net (SA-UNet). This framework connects atrous spatial pyramid pooling (ASPP) with the convolutional units of the encoder of the original U-Net in the form of residuals. The ASPP module expands the receptive field, integrates multiscale features in the network, and enhances the ability to express shallow features. Through the fusion residual module, shallow and deep features are deeply fused, and their characteristics are further exploited. The spatial attention mechanism combines spatial with semantic information so that the decoder can recover more spatial information. In this study, the crop distribution in central Guangxi province was analyzed, and experiments were conducted based on Landsat 8 multispectral remote sensing images. The experimental results showed that the improved algorithm increases the classification accuracy from 93.33% to 96.25%, and the segmentation accuracy of sugarcane, rice, and other land increased from 96.42%, 63.37%, and 88.43% to 98.01%, 83.21%, and 95.71%, respectively. The agricultural planting area results obtained by the proposed algorithm can be used as input data for regional ecological models, which is conducive to the development of accurate and real-time crop growth change models.
APA, Harvard, Vancouver, ISO, and other styles
48

Gao, Tong, Hao Chen, and Wen Chen. "MCMS-STM: An Extension of Support Tensor Machine for Multiclass Multiscale Object Recognition in Remote Sensing Images." Remote Sensing 14, no. 1 (January 2, 2022): 196. http://dx.doi.org/10.3390/rs14010196.

Full text
Abstract:
The support tensor machine (STM), extended from the support vector machine (SVM), can maintain the inherent information of a remote sensing image (RSI) represented as a tensor and obtain effective recognition results using a few training samples. However, the conventional STM is binary and fails to handle multiclass classification directly. In addition, existing STMs cannot process objects of different sizes represented as multiscale tensors and have to resize object slices to a fixed size, causing excessive background interference or loss of the object's scale information. Therefore, the multiclass multiscale support tensor machine (MCMS-STM) is proposed to effectively recognize multiclass objects of different sizes in RSIs. To achieve multiclass classification, by embedding one-versus-rest and one-versus-one mechanisms, multiple hyperplanes described by rank-R tensors are built simultaneously, instead of the single hyperplane described by a rank-1 tensor in the STM, to separate inputs of different classes. To handle multiscale objects, multiple slices of different sizes are extracted to cover the object of unknown class and expressed as multiscale tensors. Then, M-dimensional hyperplanes are established to project the input multiscale tensors into class space. To ensure efficient training of the MCMS-STM, a decomposition algorithm is presented to break the complex dual problem of the MCMS-STM into a series of analytic sub-optimizations. Using publicly available RSIs, the experimental results demonstrate that the MCMS-STM achieves 89.5% and 91.4% accuracy for classifying airplanes and ships of different classes and sizes, which outperforms typical SVM and STM methods.
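The one-versus-rest and one-versus-one mechanisms embedded in MCMS-STM can be sketched at the decision level with plain numpy; how the paper couples them with rank-R tensor hyperplanes is specific to its formulation, so the voting rules below are only the generic constructions.

```python
import numpy as np

def one_vs_rest_predict(scores):
    # scores[i, k]: decision value of the k-th "class k vs. rest" machine
    # for sample i; the class with the largest margin wins
    return np.asarray(scores).argmax(axis=1)

def one_vs_one_predict(pairwise_scores, n_classes):
    # pairwise_scores: {(i, j): decision values} with i < j; a positive
    # value votes for class i, a non-positive value for class j
    n = len(next(iter(pairwise_scores.values())))
    votes = np.zeros((n, n_classes), dtype=int)
    for (i, j), d in pairwise_scores.items():
        d = np.asarray(d)
        votes[:, i] += (d > 0).astype(int)
        votes[:, j] += (d <= 0).astype(int)
    return votes.argmax(axis=1)
```

One-versus-rest needs K binary machines and one-versus-one needs K(K-1)/2, which is the usual trade-off between training cost per machine and the number of machines.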
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Rui, Xinxin Zhang, Yuchao Zheng, Dahan Wang, and Lizhong Hua. "MSCNet: A Multilevel Stacked Context Network for Oriented Object Detection in Optical Remote Sensing Images." Remote Sensing 14, no. 20 (October 11, 2022): 5066. http://dx.doi.org/10.3390/rs14205066.

Full text
Abstract:
Oriented object detection has recently become a hot research topic in remote sensing because it provides a better spatial expression of oriented target objects. Although research in this field has made considerable progress, the multiscale nature and arbitrary orientations of targets still pose great challenges for oriented object detection tasks. In this paper, a multilevel stacked context network (MSCNet) is proposed to enhance target detection accuracy by aggregating the semantic relationships between different objects and contexts in remote sensing images. Additionally, to alleviate the defects of the traditional oriented bounding box representation, the feasibility of using a Gaussian distribution instead of the traditional representation is discussed in this paper. Finally, we verified the performance of our work on two common remote sensing datasets, and the results show that our proposed network improves on the baseline.
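The Gaussian substitute for the traditional oriented-bounding-box representation discussed in this abstract can be sketched as a simple conversion; the function name and the convention of using half-sizes as standard deviations are illustrative choices, not necessarily MSCNet's.

```python
import numpy as np

def obb_to_gaussian(cx, cy, w, h, theta):
    # map an oriented box (center, width, height, angle in radians)
    # to a 2-D Gaussian N(mean, cov): half-sizes become standard deviations
    # along the box axes, and the covariance is rotated by theta
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    mean = np.array([cx, cy], dtype=float)
    cov = R @ np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2]) @ R.T
    return mean, cov
```

A distance between two such Gaussians (e.g. a Wasserstein or Kullback-Leibler distance) then gives a smooth, angle-periodicity-free similarity between oriented boxes, which is the usual motivation for this representation.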
APA, Harvard, Vancouver, ISO, and other styles
50

Xiao, Pengfeng, Xueliang Zhang, Hongmin Zhang, Rui Hu, and Xuezhi Feng. "Multiscale Optimized Segmentation of Urban Green Cover in High Resolution Remote Sensing Image." Remote Sensing 10, no. 11 (November 15, 2018): 1813. http://dx.doi.org/10.3390/rs10111813.

Full text
Abstract:
Urban green cover in high-spatial-resolution (HR) remote sensing images has obvious multiscale characteristics; it is thus not possible to properly segment all features using a single segmentation scale, because over-segmentation or under-segmentation often occurs. In this study, an unsupervised cross-scale optimization method specifically for urban green cover segmentation is proposed. A globally optimal segmentation is first selected from multiscale segmentation results by using an optimization indicator. The regions in the globally optimal segmentation are then separated into under-segmentation and fine-segmentation parts. The under-segmentation regions are further locally refined by using the same indicator as in the global optimization. Finally, the fine-segmentation part and the refined under-segmentation part are combined to obtain the final cross-scale optimized result. In the optimized segmentation result, the green cover objects can be segmented at their specific optimal segmentation scales to reduce both under- and over-segmentation errors. Experimental results on two test HR datasets verify the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles