Journal articles on the topic 'Remote sensing Image processing Remote sensing Remote sensing Computer algorithms'


Consult the top 50 journal articles for your research on the topic 'Remote sensing Image processing Remote sensing Remote sensing Computer algorithms.'


1

Li, Hongchao, and Fang Wu. "Conversion and Visualization of Remote Sensing Image Data in CAD." Computer-Aided Design and Applications 18, S3 (October 20, 2020): 82–94. http://dx.doi.org/10.14733/cadaps.2021.s3.82-94.

Abstract:
In this paper, a process visualization model for remote sensing image classification algorithms is constructed to analyze the characteristics of process visualization in current remote sensing application systems. The usability of the model is verified in a remote sensing application system, taking a remote sensing image classification algorithm based on support vector machines as an example. Because remote sensing applications require a highly visual workflow and a large amount of data processing, the basic process of an image classification algorithm for remote sensing applications is summarized by analyzing the basic processes of existing image classification algorithms, taking the characteristics of process visualization into account. Building on this existing classification process, a process visualization model is proposed. The model takes goal-based process acts as its basic elements, provides visualization functions and interfaces for human-computer interaction through a human-computer interaction selector, and uses a template knowledge base to save processing data and describe customized processes. The model has little impact on the efficiency and accuracy of the support vector machine-based remote sensing image classification algorithm during visualization and customization. Finally, applying the model to integrate the business processing of Earth observation can, to some extent, address the problem of visualizing customized processes in remote sensing applications.
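The classification step underlying this workflow is a per-pixel support vector machine. As a rough orientation only (not the authors' visualization model), a minimal per-pixel SVM classification sketch with scikit-learn, using synthetic band values and labels, might look like this:

```python
# Minimal per-pixel SVM classification sketch (assumes scikit-learn and NumPy;
# band values and labels are synthetic placeholders, not the authors' data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 6))            # 1000 pixels, 6 spectral bands
y = (X[:, 3] > X[:, 2]).astype(int)  # toy labels, e.g. "vegetation" vs "other"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Classify a whole image by flattening its bands into rows of pixel spectra.
image = rng.random((128, 128, 6))
labels = clf.predict(image.reshape(-1, 6)).reshape(128, 128)
```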
2

Zhu, Zhiqin, Yaqin Luo, Hongyan Wei, Yong Li, Guanqiu Qi, Neal Mazur, Yuanyuan Li, and Penglong Li. "Atmospheric Light Estimation Based Remote Sensing Image Dehazing." Remote Sensing 13, no. 13 (June 22, 2021): 2432. http://dx.doi.org/10.3390/rs13132432.

Abstract:
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by suspended aerosol in the air, especially under poor weather conditions, such as fog, haze, and mist. The quality of remote sensing images directly affects the normal operation of computer vision systems. As such, haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most of the existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may have varying degrees of color distortion. This paper proposes a novel atmospheric light estimation based dehazing algorithm to obtain high visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model for the scene depth map generation of remote sensing images. Second, the atmospheric light of each hazy remote sensing image is estimated from the corresponding scene depth map. Then, the corresponding transmission map is estimated on the basis of the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are in line with the perception of human eyes in different scenes. A dataset with 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
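The final restoration step relies on the standard atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). A minimal inversion sketch, assuming the atmospheric light A and transmission map t have already been estimated by the earlier steps (which are not reproduced here):

```python
# Recover a haze-free image J from hazy image I given atmospheric light A and
# transmission t, using the atmospheric scattering model I = J*t + A*(1 - t).
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """I: HxWx3 hazy image in [0,1]; A: length-3 atmospheric light;
    t: HxW transmission map. t_min avoids division by near-zero transmission."""
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```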
3

Li, J., J. Sheng, Y. Chen, L. Ke, N. Yao, Z. Miao, X. Zeng, L. Hu, and Q. Wang. "A WEB-BASED LEARNING ENVIRONMENT OF REMOTE SENSING EXPERIMENTAL CLASS WITH PYTHON." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B5-2020 (August 24, 2020): 57–61. http://dx.doi.org/10.5194/isprs-archives-xliii-b5-2020-57-2020.

Abstract:
The remote sensing course is a general disciplinary required course for the human geography and urban-rural planning major. It comprises 48 class hours, including theoretical classes and experimental classes. Rapid technological developments in the remote sensing area demand quick and steady changes in the education programme and its realization, especially in the experimental classes. The experimental classes include: introduction to remote sensing software and basic operations, remote sensing data pre-processing (input, output, 2D and 3D terrain display, image cut, image mosaic, and projection transformation), remote sensing image enhancement, remote sensing image transformation, computer-aided classification, image interpretation, and remote sensing image terrain analysis. There are two difficulties in the remote sensing experimental classes. First, it costs a lot of time to prepare the remote sensing software and the remote sensing images. Second, some students just want to use remote sensing as a tool to investigate environmental change, while other students may want to study more remote sensing image processing technologies. A web-based learning environment for remote sensing is developed to facilitate remote sensing experimental teaching. To make the learning more effective, there are eight modules, including four optional modules. The Python programming language is chosen to implement the web-based remote sensing learning environment. The environment is implemented on a local network server, including remote sensing data processing algorithms and many satellite image data sets. Students can easily carry out the remote sensing experimental exercises by connecting to the local network server. It is developed mainly for the remote sensing experimental course, but can also be adopted by digital image processing or other courses. The web-based format may be very useful as online education is adopted because of the coronavirus disease 2019 pandemic. The results are encouraging, and some recommendations are offered for future work.
4

Jin, Liang, and Guodong Liu. "An Approach on Image Processing of Deep Learning Based on Improved SSD." Symmetry 13, no. 3 (March 17, 2021): 495. http://dx.doi.org/10.3390/sym13030495.

Abstract:
Compared with ordinary images, remote sensing images contain many kinds of objects with large scale variations and provide more detail. Ship detection, as a typical remote sensing task, plays an essential role in the field of remote sensing. With the rapid development of deep learning, remote sensing detection methods based on convolutional neural networks (CNNs) have come to occupy a key position. In remote sensing images, small-scale objects account for a large proportion of targets and are often closely arranged. In addition, the convolution layers in a CNN lack ample context information, leading to low detection accuracy for remote sensing images. To improve detection accuracy while keeping real-time detection speed, this paper proposes an efficient object detection algorithm for ship detection in remote sensing images based on an improved SSD. First, we add a feature fusion module to the shallow feature layers to refine the feature extraction ability for small objects. Then, we add a Squeeze-and-Excitation (SE) module to each feature layer, introducing an attention mechanism into the network. Experimental results on the Synthetic Aperture Radar ship detection dataset (SSDD) show that the mAP reaches 94.41% and the average detection speed is 31 FPS. Compared with SSD and other representative object detection algorithms, the improved algorithm performs better in detection accuracy and can realize real-time detection.
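The attention component added to each feature layer is a Squeeze-and-Excitation block. A minimal PyTorch sketch of a standard SE block follows; the channel count and reduction ratio are illustrative assumptions, not values from the paper:

```python
# Standard Squeeze-and-Excitation block: global average pooling ("squeeze"),
# a two-layer bottleneck ("excitation"), and channel-wise rescaling.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: B x C
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # rescale the feature maps

features = torch.randn(2, 256, 38, 38)    # e.g. an SSD feature map
print(SEBlock(256)(features).shape)
```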
5

Tripathi, Rakesh, and Neelesh Gupta. "A Review on Segmentation Techniques in Large-Scale Remote Sensing Images." SMART MOVES JOURNAL IJOSCIENCE 4, no. 4 (April 20, 2018): 7. http://dx.doi.org/10.24113/ijoscience.v4i4.143.

Abstract:
Information extraction is a very challenging task because remote sensing images are very complicated and can be influenced by many factors. The information we can derive from a remote sensing image mostly depends on the image segmentation results. Image segmentation is an important processing step in most image, video, and computer vision applications, and labeling different parts of an image remains a challenging aspect of image processing. Segmentation divides a digital image into multiple regions in order to analyze them and is also used to distinguish different objects in the image. Researchers have developed several image segmentation techniques to make images smoother and easier to evaluate, and various algorithms for automating the segmentation process have been proposed, tested, and evaluated to find the most suitable algorithm for different types of images. In this paper, a review of basic segmentation techniques for satellite images is presented.
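As one elementary example of the techniques such a review covers, Otsu's global thresholding splits an image into two regions directly from its histogram. A minimal OpenCV sketch, with a placeholder input path:

```python
# Otsu thresholding: a simple, classical segmentation baseline.
import cv2

image = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)  # placeholder path
threshold, mask = cv2.threshold(image, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", threshold)
cv2.imwrite("scene_segmented.png", mask)
```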
6

Zou, Quan, Guoqing Li, and Wenyang Yu. "Cloud Computing Based on Computational Characteristics for Disaster Monitoring." Applied Sciences 10, no. 19 (September 24, 2020): 6676. http://dx.doi.org/10.3390/app10196676.

Abstract:
Resources related to remote-sensing data, computing, and models are scattered globally. The use of remote-sensing images for disaster-monitoring applications is data-intensive and involves complex algorithms. These characteristics make the timely and rapid processing of disaster-monitoring applications challenging and inefficient. Cloud computing provides dynamically scalable resources over the Internet. Its rapid development has increased the computational performance of data-intensive computing, providing powerful throughput by distributing computation across many distributed computers. However, the use of current cloud computing models in scientific applications with remote-sensing image data has been limited to single image-processing algorithms rather than well-established models and methods. This poses problems for developing complex disaster-monitoring applications on cloud platform architectures; for example, distributed computing strategies and remote-sensing image-processing algorithms are highly coupled and not reusable. The aims of this paper are to identify the computational characteristics of various disaster-monitoring algorithms and classify them accordingly; to explore a reusable processing model for disaster-monitoring applications based on the MapReduce programming model; and to establish a programming model for each type of algorithm. This approach provides a simpler programming method for implementing disaster-monitoring applications. Finally, some examples are given to explain the proposed method and test its performance.
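To make the MapReduce decomposition concrete, the sketch below maps each image tile to a partial statistic and reduces the partial results into one total. It is a local, pure-Python illustration; the tile data and the flooded-pixel test are hypothetical, and a real deployment would distribute the map calls across a cluster:

```python
# MapReduce-style sketch: map each image tile to a partial count of flooded
# pixels, then reduce the partial results into one total.
from functools import reduce
import numpy as np

def map_tile(tile):
    """Map step: return (flooded_pixel_count, total_pixel_count) for one tile."""
    flooded = np.count_nonzero(tile < 0.2)   # hypothetical water/flood test
    return flooded, tile.size

def reduce_counts(a, b):
    """Reduce step: combine two partial results."""
    return a[0] + b[0], a[1] + b[1]

tiles = [np.random.rand(512, 512) for _ in range(8)]   # stand-in image tiles
flooded, total = reduce(reduce_counts, map(map_tile, tiles))
print(f"flooded fraction: {flooded / total:.3f}")
```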
7

Zhao, Ying, and Ye Cai Guo. "Remote Sensing Image Enhancement Based on Wavelet Transformation." Applied Mechanics and Materials 198-199 (September 2012): 223–26. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.223.

Abstract:
Remote sensing images often have very low contrast and contain various kinds of noise. To make full use of remote sensing images for information extraction and processing, the original image has to be enhanced. In this paper, an enhancement algorithm based on the biorthogonal wavelet transform is proposed. First, the noise is removed; then the non-linear wavelet transform is used to enhance the low-frequency and high-frequency coefficients respectively. Finally, the new image is reconstructed from the transformed low-frequency and high-frequency coefficients. The efficiency of the proposed algorithm is demonstrated by theoretical analysis and computer simulations.
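A minimal sketch of the general idea (biorthogonal wavelet decomposition, separate gains on the low- and high-frequency coefficients, then reconstruction), assuming PyWavelets; the paper's specific non-linear enhancement functions are replaced here by simple constant gains:

```python
# One-level biorthogonal wavelet enhancement sketch using PyWavelets.
import numpy as np
import pywt

image = np.random.rand(256, 256)               # stand-in for a denoised image
cA, (cH, cV, cD) = pywt.dwt2(image, "bior2.2")

cA_enh = 1.1 * cA                               # mild boost of low frequencies
cH_enh, cV_enh, cD_enh = (1.5 * c for c in (cH, cV, cD))   # boost detail bands

enhanced = pywt.idwt2((cA_enh, (cH_enh, cV_enh, cD_enh)), "bior2.2")
enhanced = np.clip(enhanced, 0.0, 1.0)
```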
8

Xu, R. G., G. Qiao, Y. J. Wu, and Y. J. Cao. "EXTRACTION OF RIVERS AND LAKES ON TIBETAN PLATEAU BASED ON GOOGLE EARTH ENGINE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1797–801. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1797-2019.

Abstract:
The Tibetan Plateau (TP) is the area of China most abundant in water resources and hydropower resources. It is also the birthplace of the main rivers of Southeast Asia and plays an important strategic role. However, due to its remote location and complex topography, observations of surface hydrometeorological elements are extremely scarce, which seriously restricts understanding of the water cycle in this area. Using remote sensing images to extract rivers and lakes on the TP can yield a great deal of valuable water resources information. However, downloading and processing remote sensing images is very time-consuming; processing large-scale, long time series imagery often involves hundreds of gigabytes of data, which is demanding for personal computers and inefficient. As a cloud platform dedicated to geoscience data processing and analysis, Google Earth Engine (GEE) integrates many excellent remote sensing image processing algorithms. It does not require downloading images and supports online remote sensing image processing, which greatly improves efficiency. Based on GEE, monthly data for the Yarlung Zangbo River at Nuxia Hydrological Station and annual data for typical lakes were extracted and vectorized from pre-processed Landsat series images. It was found that the area of the Yarlung Zangbo River at Nuxia Hydrological Station varies periodically. The changing trend of typical lakes is also revealed.
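A water-extraction workflow of this kind can be sketched with the GEE Python API as below. The dataset ID, band names, NDWI index, and threshold are illustrative assumptions, not the exact recipe used in the paper:

```python
# Sketch of deriving a water mask on Google Earth Engine's Python API.
import ee

ee.Initialize()

roi = ee.Geometry.Point([94.56, 29.47]).buffer(20000)   # hypothetical area on the TP
composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterBounds(roi)
             .filterDate("2019-06-01", "2019-09-30")
             .median())

# NDWI = (green - NIR) / (green + NIR); positive values indicate water.
ndwi = composite.normalizedDifference(["SR_B3", "SR_B5"])
water_mask = ndwi.gt(0.0).selfMask().clip(roi)
print(water_mask.getInfo()["bands"])
```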
9

Wang, Sen, Xiaoming Sun, Pengfei Liu, Kaige Xu, Weifeng Zhang, and Chenxu Wu. "Research on Remote Sensing Image Matching with Special Texture Background." Symmetry 13, no. 8 (July 29, 2021): 1380. http://dx.doi.org/10.3390/sym13081380.

Abstract:
The purpose of image registration is to find the symmetry between the reference image and the image to be registered. In order to improve the registration of unmanned aerial vehicle (UAV) remote sensing imagery with a special texture background, this paper proposes an improved scale-invariant feature transform (SIFT) algorithm that combines image color and exposure information based on an adaptive quantization strategy (AQCE-SIFT). By using the color and exposure information of the image, this method can enhance the contrast between the textures of an image with a special texture background, which allows easier feature extraction. The descriptor is constructed through an adaptive quantization strategy, so that remote sensing images with large geometric distortion or affine changes have a higher correct matching rate during registration. The experimental results showed that the AQCE-SIFT algorithm proposed in this paper produces a more reasonable distribution of extracted feature points than the traditional SIFT algorithm. In the case of 0 degree, 30 degree, and 60 degree geometric distortion, when the remote sensing image had a texture-scarce region, the number of matching points increased by 21.3%, 45.5%, and 28.6%, respectively, and the correct matching rate increased by 0%, 6.0%, and 52.4%, respectively. When the remote sensing image had a large number of similar repetitive texture regions, the number of matching points increased by 30.4%, 30.9%, and −11.1%, respectively, and the correct matching rate increased by 1.2%, 0.8%, and 20.8%, respectively. When processing remote sensing images with special texture backgrounds, the AQCE-SIFT algorithm also has more advantages than existing common algorithms such as color SIFT (CSIFT), gradient location and orientation histogram (GLOH), and speeded-up robust features (SURF) in searching for the symmetry of features between images.
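For reference, the baseline SIFT matching pipeline that AQCE-SIFT improves on can be sketched with OpenCV (standard SIFT plus Lowe's ratio test); this is the conventional algorithm, not the authors' adaptive quantization variant:

```python
# Baseline SIFT feature matching with Lowe's ratio test (OpenCV >= 4.4).
import cv2

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths
sensed = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(sensed, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]                   # Lowe's ratio test
print(f"{len(good)} putative matches")
```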
10

Kaur, Sumit. "Deep Learning Based High-Resolution Remote Sensing Image classification." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 10 (October 30, 2017): 22. http://dx.doi.org/10.23956/ijarcsse.v7i10.384.

Abstract:
Deep learning is an emerging research area in the machine learning and pattern recognition field, presented with the goal of drawing machine learning nearer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and learn hierarchical representations for classification. In recent years it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.
11

Zhou, Liming, Haoxin Yan, Yingzi Shan, Chang Zheng, Yang Liu, Xianyu Zuo, and Baojun Qiao. "Aircraft Detection for Remote Sensing Images Based on Deep Convolutional Neural Networks." Journal of Electrical and Computer Engineering 2021 (August 11, 2021): 1–16. http://dx.doi.org/10.1155/2021/4685644.

Abstract:
Aircraft detection in remote sensing images, as one of the fields of computer vision, is a significant task in deep learning based image processing. Recently, many high-performance aircraft detection algorithms have been developed and applied in different scenarios. However, these algorithms still have a series of problems; for instance, they miss some small-scale aircraft when applied to remote sensing images. There are two main reasons for this problem: one is that aircraft in remote sensing images are usually small in size, which makes detection difficult; the other is that the background of a remote sensing image is usually complex, so algorithms applied to this scenario are easily affected by the background. To address the problem of small size, this paper proposes the Multiscale Detection Network (MSDN), which introduces a multiscale detection architecture to detect small-scale aircraft. To suppress background noise, this paper proposes the Deeper and Wider Module (DAWM), which increases the receptive field of the network to alleviate this effect. To address both problems simultaneously, the DAWM is introduced into the MSDN, and the resulting network structure is named the Multiscale Refined Detection Network (MSRDN). The experimental results show that the MSRDN method detects the small-scale aircraft that other algorithms miss and that its performance indicators are higher than those of other algorithms.
12

Figueiredo, Marco A., Clay S. Gloster, Mark Stephens, Corey A. Graves, and Mouna Nakkar. "Implementation of Multispectral Image Classification on a Remote Adaptive Computer." VLSI Design 10, no. 3 (January 1, 2000): 307–19. http://dx.doi.org/10.1155/2000/31983.

Abstract:
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm (implemented on a typical general-purpose computer).
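A probabilistic neural network of the kind implemented on the adaptive computer reduces, per pixel, to a Parzen-window density estimate for each class followed by an argmax. A minimal software-only NumPy sketch, with an illustrative smoothing parameter and synthetic data:

```python
# Probabilistic neural network (PNN) classification sketch: each class score is
# the mean Gaussian kernel response of the pixel against that class's training
# spectra; the pixel is assigned to the highest-scoring class.
import numpy as np

def pnn_classify(pixels, train_X, train_y, sigma=0.1):
    classes = np.unique(train_y)
    scores = np.empty((pixels.shape[0], classes.size))
    for j, c in enumerate(classes):
        centers = train_X[train_y == c]                        # pattern layer
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)  # summation layer
    return classes[np.argmax(scores, axis=1)]                  # decision layer

rng = np.random.default_rng(1)
train_X = rng.random((200, 4))                  # 4 spectral bands
train_y = (train_X[:, 0] > 0.5).astype(int)     # toy class labels
print(pnn_classify(rng.random((10, 4)), train_X, train_y))
```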
13

Chebbi, Imen, Nedra Mellouli, Imed Riadh Farah, and Myriam Lamolle. "Big Remote Sensing Image Classification Based on Deep Learning Extraction Features and Distributed Spark Frameworks." Big Data and Cognitive Computing 5, no. 2 (May 5, 2021): 21. http://dx.doi.org/10.3390/bdcc5020021.

Abstract:
Big data analysis assumes a significant role in Earth observation using remote sensing images, since the explosion of image data from multiple sensors is used in several fields. Traditional data analysis techniques have various limitations for storing and processing massive volumes of data. Moreover, big remote sensing data analytics demands sophisticated algorithms based on specific techniques to store and process the data in real time or near real time with high accuracy, efficiency, and speed. In this paper, we present a method for storing a huge number of heterogeneous satellite images based on the Hadoop distributed file system (HDFS) and Apache Spark. We also present how deep learning algorithms such as VGGNet and U-Net can benefit big remote sensing data processing for feature extraction and classification. The obtained results show that our approach outperforms other methods.
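The storage and distribution side of such a pipeline can be sketched with PySpark as below; the HDFS path is a placeholder, and the feature extraction function stands in for the VGGNet/U-Net models described in the paper:

```python
# Sketch of distributing per-image processing over Spark, with imagery on HDFS.
import numpy as np
from pyspark.sql import SparkSession

def extract_features(raw_bytes):
    # Placeholder: decode the image bytes and compute a simple feature vector.
    data = np.frombuffer(raw_bytes, dtype=np.uint8)
    return [float(data.mean()), float(data.std())]

spark = SparkSession.builder.appName("rs-feature-extraction").getOrCreate()
images = spark.sparkContext.binaryFiles("hdfs:///data/scenes/*.tif")  # placeholder path
features = images.mapValues(extract_features)
print(features.take(2))
spark.stop()
```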
14

Qiu, Tianqi, Xiaojin Liang, Qingyun Du, Fu Ren, Pengjie Lu, and Chao Wu. "Techniques for the Automatic Detection and Hiding of Sensitive Targets in Emergency Mapping Based on Remote Sensing Data." ISPRS International Journal of Geo-Information 10, no. 2 (February 9, 2021): 68. http://dx.doi.org/10.3390/ijgi10020068.

Abstract:
Emergency remote sensing mapping can provide support for decision making in disaster assessment or disaster relief, and therefore plays an important role in disaster response. Traditional emergency remote sensing mapping methods use decryption algorithms based on manual retrieval and image editing tools when processing sensitive targets. Although these traditional methods can achieve target recognition, they are inefficient and cannot meet the high time-efficiency requirements of disaster relief. In this paper, we combined an object detection model with a generative adversarial network model to build a two-stage deep learning model for sensitive target detection and hiding in remote sensing images, and we verified the model performance on the aircraft object processing problem in remote sensing mapping. To improve the experimental protocol, we introduced a modification to the reconstruction loss function, candidate frame optimization in the region proposal network, the PointRend algorithm, and a modified attention mechanism based on the characteristics of aircraft objects. Experiments revealed that our method is more efficient than traditional manual processing; the precision is 94.87%, the recall is 84.75%, higher than that of the original Mask R-CNN model, and the F1-score is 44% higher than that of the original model. In addition, our method can quickly and intelligently detect and hide sensitive targets in remote sensing images, thereby shortening the time needed for emergency mapping.
15

Shimoi, Nobuhiro, and Yoshihiro Takita. "Mine Remote Sensing Using a Working Robot." Journal of Robotics and Mechatronics 17, no. 1 (February 20, 2005): 101–5. http://dx.doi.org/10.20965/jrm.2005.p0101.

Abstract:
In conducting mine detection experiments using our prototype robot COMET-1, we developed end effectors for the robot's working legs. When detecting a mine, a robot must step safely and stably without hitting it. For this study, we created a simulation model to test the movement of a robot having an optical proximity sensor on each foot and used a walking algorithm with compliance control. We verified its efficiency in walking experiments. We also studied the use of remote sensing technology with an IR camera combined with other sensors. Tests with trial mines were used to study detection with the IR camera and to study technologies for collecting and processing image data in real time for optimum mine detection.
16

Kushwah, Chandra Pal, and Kuruna Markam. "Semantic Segmentation of Satellite Images using Deep Learning." International Journal of Innovative Technology and Exploring Engineering 10, no. 8 (June 30, 2021): 33–37. http://dx.doi.org/10.35940/ijitee.h9186.0610821.

Abstract:
In recent years, the performance of deep learning in natural scene image processing has promoted its use in remote sensing image analysis. In this paper, we use deep neural networks (DNNs) for the semantic segmentation of remote sensing images. To make them suitable for multi-target semantic segmentation of remote sensing images, we enhance the SegNet encoder-decoder CNN structure with index pooling and U-Net. The findings reveal that each model has its own benefits and drawbacks for segmenting different objects. Furthermore, we provide an integrated algorithm that incorporates the two models. The test results indicate that the proposed integrated algorithm takes advantage of both multi-target segmentation models and obtains improved segmentation relative to either model alone.
17

Feng, R., X. Li, and H. Shen. "MOUNTAINOUS REMOTE SENSING IMAGES REGISTRATION BASED ON IMPROVED OPTICAL FLOW ESTIMATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 479–84. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-479-2019.

Abstract:
Registration of mountainous remote sensing images is more complicated than in other areas because of the geometric distortion caused by topographic relief, which cannot be handled precisely by constructing local mapping functions in a feature-based framework. The optical flow algorithm, which in computer vision estimates the motion between consecutive frames pixel by pixel, is introduced here for mountainous remote sensing image registration. However, it is sensitive to the land cover changes that are inevitable in remote sensing images, resulting in incorrect displacements. To address this problem, we propose an improved optical flow estimation concentrated on post-processing, namely displacement modification. First, the Laplacian of Gaussian (LoG) algorithm is employed to detect abnormal values in the color map of the displacement field. Then, the abnormal displacements are recalculated from an interpolation surface constructed from the remaining accurate displacements. Following coordinate transformation and resampling, the registration result is generated. Experiments demonstrate that the proposed method is insensitive to changing regions of mountainous remote sensing images, generates precise registrations, and outperforms other local transformation model estimation methods in both visual judgment and quantitative evaluation.
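To make the pipeline concrete, the sketch below estimates a dense displacement field between two images and applies a LoG-based screen to flag abnormal displacements for later re-interpolation. Farneback optical flow is used as a stand-in for the paper's improved estimator, and the threshold is illustrative:

```python
# Dense displacement estimation between a reference and a sensed image, with a
# Laplacian-of-Gaussian screen that flags abnormal displacements.
import cv2
import numpy as np
from scipy.ndimage import gaussian_laplace

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)      # placeholder paths
sensed = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(ref, sensed, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)   # HxWx2 displacements
magnitude = np.linalg.norm(flow, axis=2)

log_response = gaussian_laplace(magnitude, sigma=2.0)
outliers = np.abs(log_response) > 3 * log_response.std()        # illustrative threshold
print(f"{outliers.mean():.1%} of displacements flagged as abnormal")
```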
18

Alam, Muhammad, Jian-Feng Wang, Cong Guangpei, LV Yunrong, and Yuanfang Chen. "Convolutional Neural Network for the Semantic Segmentation of Remote Sensing Images." Mobile Networks and Applications 26, no. 1 (February 2021): 200–215. http://dx.doi.org/10.1007/s11036-020-01703-3.

Abstract:
In recent years, the success of deep learning in natural scene image processing has boosted its application in the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We improve the encoder-decoder CNN structure SegNet with index pooling and U-Net to make it suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that integrates these two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieve better segmentation than either model alone.
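The index pooling these models rely on stores the max-pooling indices in the encoder and reuses them for unpooling in the decoder. A minimal single-stage PyTorch sketch of that mechanism, with illustrative layer sizes:

```python
# One encoder/decoder stage with SegNet-style index pooling: the max-pooling
# indices recorded in the encoder drive the unpooling in the decoder.
import torch
import torch.nn as nn

class TinyIndexPoolSegNet(nn.Module):
    def __init__(self, in_ch=3, mid_ch=32, num_classes=6):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2)
        self.dec = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(mid_ch, num_classes, 1))

    def forward(self, x):
        feat = self.enc(x)
        pooled, indices = self.pool(feat)          # remember where the maxima were
        unpooled = self.unpool(pooled, indices)    # place values back at those positions
        return self.dec(unpooled)                  # per-pixel class scores

logits = TinyIndexPoolSegNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 6, 64, 64])
```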
19

Racetin, Ivan, and Andrija Krtalić. "Systematic Review of Anomaly Detection in Hyperspectral Remote Sensing Applications." Applied Sciences 11, no. 11 (May 26, 2021): 4878. http://dx.doi.org/10.3390/app11114878.

Abstract:
Hyperspectral sensors are passive instruments that record reflected electromagnetic radiation in tens or hundreds of narrow and consecutive spectral bands. In the last two decades, the availability of hyperspectral data has sharply increased, propelling the development of a plethora of hyperspectral classification and target detection algorithms. Anomaly detection methods in hyperspectral images refer to a class of target detection methods that do not require any a priori knowledge about a hyperspectral scene or target spectrum. They are unsupervised learning techniques that automatically discover rare features in hyperspectral images. This review paper is organized into two parts: part A provides a bibliographic analysis of hyperspectral image processing for anomaly detection in remote sensing applications. Development of the subject field is discussed, and key authors and journals are highlighted. In part B an overview of the topic is presented, starting from the mathematical framework for anomaly detection. The anomaly detection methods were generally categorized as techniques that implement structured or unstructured background models and then organized into appropriate sub-categories. Specific anomaly detection methods are presented with corresponding detection statistics, and their properties are discussed. This paper represents the first review regarding hyperspectral image processing for anomaly detection in remote sensing applications.
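The most common unstructured-background baseline covered by such reviews is the global Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the scene-wide background statistics. A minimal NumPy sketch:

```python
# Global RX anomaly detector: score each pixel spectrum x by the Mahalanobis
# distance (x - mu)^T Sigma^{-1} (x - mu) to the image-wide background statistics.
import numpy as np

def rx_scores(cube):
    """cube: H x W x B hyperspectral image; returns an H x W anomaly score map."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return scores.reshape(h, w)

cube = np.random.rand(64, 64, 50)      # stand-in hyperspectral cube
print(rx_scores(cube).max())
```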
20

Karim, Shahid, Ye Zhang, Shoulin Yin, Irfana Bibi, and Ali Anwar Brohi. "A brief review and challenges of object detection in optical remote sensing imagery." Multiagent and Grid Systems 16, no. 3 (October 30, 2020): 227–43. http://dx.doi.org/10.3233/mgs-200330.

Abstract:
Traditional object detection algorithms and strategies struggle to meet the requirements of data processing efficiency, performance, speed, and intelligence in object detection. Through the study and imitation of the cognitive abilities of the brain, deep learning can analyze and process data features; it has a strong ability of visualization and has become the mainstream algorithm for current object detection applications. First, we discuss the development of traditional object detection methods. Second, the object detection frameworks that combine region proposals and convolutional neural networks (CNNs), such as the region-based CNN (R-CNN), Spatial Pyramid Pooling Network (SPP-NET), Fast R-CNN, and Faster R-CNN, are briefly characterized for optical remote sensing applications. The You Only Look Once (YOLO) algorithm is representative of the frameworks, such as YOLO and the Single Shot MultiBox Detector (SSD), that transform object detection into a regression problem. The limitations of remote sensing images and object detectors are highlighted and discussed; the feasibility and limitations of these approaches should lead researchers to select appropriate image enhancements prudently. Finally, the problems of deep learning based object detection algorithms are summarized and future recommendations are offered.
21

Yin, Shoulin, Ye Zhang, and Shahid Karim. "Region search based on hybrid convolutional neural network in optical remote sensing images." International Journal of Distributed Sensor Networks 15, no. 5 (May 2019): 155014771985203. http://dx.doi.org/10.1177/1550147719852036.

Abstract:
Big data is currently a hot issue. In particular, the rapid growth of the Internet of Things has caused a sharp growth in data. Enormous numbers of networked sensors continuously collect and transmit data to be stored and processed in the cloud, including remote sensing data, environmental data, and geographical data. Regions are very important objects in remote sensing data and are the main subject of this article. Region search is a crucial task in remote sensing processing, especially for military and civilian applications. It is difficult to search regions quickly and accurately and to achieve generalizability of region features because of the complex background information and the small size of regions. In particular, when performing region search in large-scale remote sensing images, detailed information can be extracted within regions as features. To overcome these difficulties, we propose an accurate and fast region search method for optical remote sensing images in a cloud computing environment, based on a hybrid convolutional neural network. The proposed region search method is partitioned into four processes. First, a fully convolutional network is adopted to produce all the candidate regions that may contain objects; this avoids an exhaustive search over the input images. Then, the features of all candidate regions are extracted by a fast region-based convolutional neural network structure. Third, we design a new difficult-sample mining method for the training process. Finally, in order to improve the region search precision, we use an iterative bounding box regression algorithm to normalize the detected bounding boxes containing candidate objects. The proposed algorithm is evaluated on optical remote sensing images acquired from Google Earth. The experimental results show that the proposed region search method consistently achieves better results regardless of the type of images tested. Compared with traditional region search methods, such as region-based convolutional neural networks and the newest feature extraction frameworks, our proposed method shows better robustness with complex contextual semantic information and backgrounds.
22

Zhou, Jia-Xiang, Zhi-Wei Li, and Chong Fan. "Improved fast mean shift algorithm for remote sensing image segmentation." IET Image Processing 9, no. 5 (May 1, 2015): 389–94. http://dx.doi.org/10.1049/iet-ipr.2014.0393.

23

Belov, A. M., and A. Y. Denisova. "Earth remote sensing imagery classification using a multi-sensor super-resolution fusion algorithm." Computer Optics 44, no. 4 (August 2020): 627–35. http://dx.doi.org/10.18287/2412-6179-co-735.

Abstract:
Earth remote sensing data fusion is intended to produce images of higher quality than the original ones. However, the fusion impact on further thematic processing remains an open question because fusion methods are mostly used to improve the visual data representation. This article addresses an issue of the effect of fusion with increasing spatial and spectral resolution of data on thematic classification of images using various state-of-the-art classifiers and features extraction methods. In this paper, we use our own algorithm to perform multi-frame image fusion over optical remote sensing images with different spatial and spectral resolutions. For classification, we applied support vector machines and Random Forest algorithms. For features, we used spectral channels, extended attribute profiles and local feature attribute profiles. An experimental study was carried out using model images of four imaging systems. The resulting image had a spatial resolution of 2, 3, 4 and 5 times better than for the original images of each imaging system, respectively. As a result of our studies, it was revealed that for the support vector machines method, fusion was inexpedient since excessive spatial details had a negative effect on the classification. For the Random Forest algorithm, the classification results of a fused image were more accurate than for the original low-resolution images in 90% of cases. For example, for images with the smallest difference in spatial resolution (2 times) from the fusion result, the classification accuracy of the fused image was on average 4% higher. In addition, the results obtained for the Random Forest algorithm with fusion were better than the results for the support vector machines method without fusion. Additionally, it was shown that the classification accuracy of a fused image using the Random Forest method could be increased by an average of 9% due to the use of extended attribute profiles as features. Thus, when using data fusion, it is better to use the Random Forest classifier, whereas using fusion with the support vector machines method is not recommended.
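For orientation, the Random Forest classification step on per-pixel features (spectral channels, attribute profiles, and similar) looks roughly as follows with scikit-learn; the features and labels here are synthetic stand-ins, not the fused imagery used in the paper:

```python
# Per-pixel Random Forest classification sketch on stacked feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = rng.random((5000, 12))            # e.g. spectral bands + attribute profiles
labels = rng.integers(0, 4, size=5000)       # 4 synthetic land-cover classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", rf.score(X_te, y_te))
```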
24

Watson, Ken. "Processing remote sensing images using the 2-D FFT—Noise reduction and other applications." GEOPHYSICS 58, no. 6 (June 1993): 835–52. http://dx.doi.org/10.1190/1.1443468.

Abstract:
With the development of faster and less expensive computers, it is now practical to employ algorithms for processing remote sensing images that were only feasible on large mainframe computers a short time ago. The two‐dimensional (2-D) fast Fourier transform (FFT) is a powerful means for removing noise because it can be used to design and implement efficient filters based on the observed spatial or frequency patterns of the noise. Illustrations of transforms of image points and lines in various configurations and periodic patterns are used as a tutorial to identify and reduce a variety of noise patterns of increasing complexity from an assortment of spacecraft and aircraft systems. A general strategy for filtering developed from this study involves: filtering only derived products (not original images), initially removing noise evident in the transform, and applying a “minimum” filter to reduce residual noise. The 2-D FFT is also applied to mosaicking, enlargement, registration, and extraction of albedo and slope information.
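A minimal sketch of the kind of frequency-domain filtering described here: transform the image, suppress an identified periodic-noise peak with a small notch, and invert. The peak location and notch radius are illustrative:

```python
# 2-D FFT notch filtering sketch for periodic noise removal.
import numpy as np

image = np.random.rand(256, 256)            # stand-in for a noisy remote sensing band
F = np.fft.fftshift(np.fft.fft2(image))

# Zero a small neighbourhood around an identified noise peak and its
# symmetric counterpart; the peak location is illustrative.
rows, cols = np.indices(F.shape)
center = np.array(F.shape) // 2
for peak in [(30, 0), (-30, 0)]:
    dist = np.hypot(rows - (center[0] + peak[0]), cols - (center[1] + peak[1]))
    F[dist < 3] = 0

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```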
25

Yao, Ziqiang, Jinlu Jia, and Yurong Qian. "MCNet: Multi-Scale Feature Extraction and Content-Aware Reassembly Cloud Detection Model for Remote Sensing Images." Symmetry 13, no. 1 (December 26, 2020): 28. http://dx.doi.org/10.3390/sym13010028.

Abstract:
Cloud detection plays a vital role in remote sensing data preprocessing. Traditional cloud detection algorithms have difficulty with feature extraction and thus produce poor detection results when processing remote sensing images with uneven cloud distribution and a complex surface background. To achieve better detection results, a cloud detection method with multi-scale feature extraction and a content-aware reassembly network (MCNet) is proposed. Using pyramid convolution and channel attention mechanisms to enhance the model's feature extraction capability, MCNet can fully extract the spatial and channel information of clouds in an image. Content-aware reassembly is used to ensure that upsampling in the network recovers enough deep semantic information and improves the model's cloud detection performance. The experimental results show that the proposed MCNet model achieves good results in cloud detection tasks.
26

Tan, Yihua, Shengzhou Xiong, Zhi Li, Jinwen Tian, and Yansheng Li. "Accurate Detection of Built-Up Areas from High-Resolution Remote Sensing Imagery Using a Fully Convolutional Network." Photogrammetric Engineering & Remote Sensing 85, no. 10 (October 1, 2019): 737–52. http://dx.doi.org/10.14358/pers.85.10.737.

Abstract:
The analysis of built-up areas has always been a popular research topic for remote sensing applications. However, automatic extraction of built-up areas from a wide range of regions remains challenging. In this article, a fully convolutional network (FCN) based strategy is proposed to address built-up area extraction. The proposed algorithm can be divided into two main steps. First, the remote sensing image is divided into blocks and their deep features are extracted by a lightweight multi-branch convolutional neural network (LMB-CNN). Second, the deep features are rearranged into feature maps that are fed into a well-designed FCN for image segmentation. Our FCN is integrated with multi-branch blocks and outputs multi-channel segmentation masks that are utilized to balance false alarms and missed alarms. Experiments demonstrate that the overall classification accuracy of the proposed algorithm reaches 98.75% on the test data set and that it processes faster than the existing state-of-the-art algorithms.
27

Chen, Guobin, Zhiyong Jiang, and M. M. Kamruzzaman. "Radar remote sensing image retrieval algorithm based on improved Sobel operator." Journal of Visual Communication and Image Representation 71 (August 2020): 102720. http://dx.doi.org/10.1016/j.jvcir.2019.102720.

28

Xia, Y., P. d’Angelo, J. Tian, and P. Reinartz. "DENSE MATCHING COMPARISON BETWEEN CLASSICAL AND DEEP LEARNING BASED ALGORITHMS FOR REMOTE SENSING DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 521–25. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-521-2020.

Abstract:
Deep learning and convolutional neural networks (CNNs) have achieved great success in image processing thanks to their powerful ability to learn task-specific features. Many deep learning based algorithms have been developed for dense image matching, which is a hot topic in the computer vision community. These methods are tested on close-range or street-view stereo data but have not been well studied on remote sensing datasets, including aerial and satellite data. As more high-quality datasets are collected by recent airborne and spaceborne sensors, it is necessary to compare the performance of these algorithms with classical dense matching algorithms on remote sensing data. In this paper, Guided Aggregation Net (GA-Net), which is among the most competitive algorithms on the KITTI 2015 benchmark (a street-view dataset), is tested and compared with Semi-Global Matching (SGM) on satellite and airborne data. GA-Net is an end-to-end neural network that takes a stereo pair and directly outputs a disparity map indicating the scene's depth information. It is based on a differentiable approximation of SGM embedded in a neural network and performs well in ill-posed regions such as textureless areas and slanted surfaces. The results demonstrate that GA-Net is capable of producing a smoother disparity map with fewer errors, particularly for across-track data acquired on different dates.
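For the classical baseline, OpenCV's semi-global block matcher gives a quick approximation of SGM on an epipolar-rectified stereo pair; the file paths are placeholders and the parameters are illustrative, needing tuning for aerial or satellite data:

```python
# Classical semi-global matching baseline on a rectified stereo pair (OpenCV).
import cv2

left = cv2.imread("left_epipolar.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths
right = cv2.imread("right_epipolar.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgm = cv2.StereoSGBM_create(minDisparity=0,
                            numDisparities=128,           # must be divisible by 16
                            blockSize=block,
                            P1=8 * block * block,         # smoothness penalties
                            P2=32 * block * block,
                            uniquenessRatio=10)
disparity = sgm.compute(left, right).astype("float32") / 16.0    # fixed-point to pixels
cv2.imwrite("disparity.png", cv2.normalize(disparity, None, 0, 255,
                                           cv2.NORM_MINMAX).astype("uint8"))
```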
29

Wu, Z., X. Chen, Y. Gao, and Y. Li. "RAPID TARGET DETECTION IN HIGH RESOLUTION REMOTE SENSING IMAGES USING YOLO MODEL." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1915–20. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1915-2018.

Abstract:
Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, because complex neighboring environments can cause recognition algorithms to mistake irrelevant ground objects for target objects. Deep convolutional neural networks (DCNNs) are a hotspot in object detection because of their powerful feature extraction ability and have achieved state-of-the-art results in computer vision. The common DCNN-based object detection pipeline consists of region proposal, CNN feature extraction, region classification, and post-processing. The YOLO model instead frames object detection as a regression problem: a single CNN predicts bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO based model is used for object detection in high resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and on our airport/airplane dataset obtained from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process and achieves good accuracy.
30

Yakubu, Bashir Ishaku, Shua’ib Musa Hassan, and Sallau Osisiemo Asiribo. "AN ASSESSMENT OF SPATIAL VARIATION OF LAND SURFACE CHARACTERISTICS OF MINNA, NIGER STATE NIGERIA FOR SUSTAINABLE URBANIZATION USING GEOSPATIAL TECHNIQUES." Geosfera Indonesia 3, no. 2 (August 28, 2018): 27. http://dx.doi.org/10.19184/geosi.v3i2.7934.

Abstract:
Rapid urbanization significantly impacts the land cover patterns of the environment, which has been evident in the depletion of vegetal reserves and, in general, in the modification of human climatic systems (Henderson, et al., 2017; Kumar, Masago, Mishra, & Fukushi, 2018; Luo and Lau, 2017). This study explores remote sensing classification techniques and other auxiliary data to determine LULCC over a period of 50 years (1967-2016). The LULCC types identified were quantitatively evaluated using a change detection approach applied to the results of a maximum likelihood classification algorithm in GIS. Accuracy assessment results were evaluated and found to be between 56 and 98 percent for the LULC classification. The change detection analysis revealed changes in the LULC types in Minna from 1976 to 2016. Built-up area increased from 74.82 ha in 1976 to 116.58 ha in 2016, farmland increased from 2.23 ha to 46.45 ha, and bare surface increased from 120.00 ha to 161.31 ha between 1976 and 2016, resulting in a decline in vegetation, water bodies, and wetlands. The decade of rapid urbanization was found to coincide with the period of increased Public-Private Partnership Agreements (PPPA). The increase in farmland was due to the adoption of urban agriculture, which influences food security and environmental sustainability. The observed increase in built-up areas, farmland, and bare surfaces has led to a substantial reduction in vegetation and water bodies. The oscillatory nature of water-body LULCC, which was not particularly consistent with the rates of urbanization, also suggests that, beyond the urbanization process, other factors may influence the LULCC of water bodies in urban settlements. Keywords: Minna, Niger State, Remote Sensing, Land Surface Characteristics
Megahed, Y., Cabral, P., Silva, J., & Caetano, M. (2015). Land cover mapping analysis and urban growth modelling using remote sensing techniques in greater Cairo region—Egypt. ISPRS International Journal of Geo-Information, 4(3), pp. 1750-1769. Metternicht, G. (2001). Assessing temporal and spatial changes of salinity using fuzzy logic, remote sensing and GIS. Foundations of an expert system. Ecological modelling, 144(2-3), pp. 163-179. Miller, R. B., & Small, C. (2003). Cities from space: potential applications of remote sensing in urban environmental research and policy. Environmental Science & Policy, 6(2), pp. 129-137. Mirzaei, P. A. (2015). Recent challenges in modeling of urban heat island. Sustainable Cities and Society, 19, pp. 200-206. Mohammed, I., Aboh, H., & Emenike, E. (2007). A regional geoelectric investigation for groundwater exploration in Minna area, north west Nigeria. Science World Journal, 2(4) Morenikeji, G., Umaru, E., Liman, S., & Ajagbe, M. (2015). Application of Remote Sensing and Geographic Information System in Monitoring the Dynamics of Landuse in Minna, Nigeria. International Journal of Academic Research in Business and Social Sciences, 5(6), pp. 320-337. Mukherjee, A. B., Krishna, A. P., & Patel, N. (2018). Application of Remote Sensing Technology, GIS and AHP-TOPSIS Model to Quantify Urban Landscape Vulnerability to Land Use Transformation Information and Communication Technology for Sustainable Development (pp. 31-40): Springer. Myint, S. W., Gober, P., Brazel, A., Grossman-Clarke, S., & Weng, Q. (2011). Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, 115(5), pp. 1145-1161. Nemmour, H., & Chibani, Y. (2006). Multiple support vector machines for land cover change detection: An application for mapping urban extensions. ISPRS Journal of Photogrammetry and Remote Sensing, 61(2), pp. 125-133. Niu, X., & Ban, Y. (2013). Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach. International journal of remote sensing, 34(1), pp. 1-26. Nogueira, K., Penatti, O. A., & dos Santos, J. A. (2017). Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognition, 61, pp. 539-556. Oguz, H., & Zengin, M. (2011). Analyzing land use/land cover change using remote sensing data and landscape structure metrics: a case study of Erzurum, Turkey. Fresenius Environmental Bulletin, 20(12), pp. 3258-3269. Pohl, C., & Van Genderen, J. L. (1998). Review article multisensor image fusion in remote sensing: concepts, methods and applications. International journal of remote sensing, 19(5), pp. 823-854. Price, O., & Bradstock, R. (2014). Countervailing effects of urbanization and vegetation extent on fire frequency on the Wildland Urban Interface: Disentangling fuel and ignition effects. Landscape and urban planning, 130, pp. 81-88. Prosdocimi, I., Kjeldsen, T., & Miller, J. (2015). Detection and attribution of urbanization effect on flood extremes using nonstationary flood‐frequency models. Water resources research, 51(6), pp. 4244-4262. Rawat, J., & Kumar, M. (2015). Monitoring land use/cover change using remote sensing and GIS techniques: A case study of Hawalbagh block, district Almora, Uttarakhand, India. The Egyptian Journal of Remote Sensing and Space Science, 18(1), pp. 77-84. Rokni, K., Ahmad, A., Solaimani, K., & Hazini, S. (2015). 
A new approach for surface water change detection: Integration of pixel level image fusion and image classification techniques. International Journal of Applied Earth Observation and Geoinformation, 34, pp. 226-234. Sakieh, Y., Amiri, B. J., Danekar, A., Feghhi, J., & Dezhkam, S. (2015). Simulating urban expansion and scenario prediction using a cellular automata urban growth model, SLEUTH, through a case study of Karaj City, Iran. Journal of Housing and the Built Environment, 30(4), pp. 591-611. Santra, A. (2016). Land Surface Temperature Estimation and Urban Heat Island Detection: A Remote Sensing Perspective. Remote Sensing Techniques and GIS Applications in Earth and Environmental Studies, p 16. Shrivastava, L., & Nag, S. (2017). MONITORING OF LAND USE/LAND COVER CHANGE USING GIS AND REMOTE SENSING TECHNIQUES: A CASE STUDY OF SAGAR RIVER WATERSHED, TRIBUTARY OF WAINGANGA RIVER OF MADHYA PRADESH, INDIA. Shuaibu, M., & Sulaiman, I. (2012). Application of remote sensing and GIS in land cover change detection in Mubi, Adamawa State, Nigeria. J Technol Educ Res, 5, pp. 43-55. Song, B., Li, J., Dalla Mura, M., Li, P., Plaza, A., Bioucas-Dias, J. M., . . . Chanussot, J. (2014). Remotely sensed image classification using sparse representations of morphological attribute profiles. IEEE transactions on geoscience and remote sensing, 52(8), pp. 5122-5136. Song, X.-P., Sexton, J. O., Huang, C., Channan, S., & Townshend, J. R. (2016). Characterizing the magnitude, timing and duration of urban growth from time series of Landsat-based estimates of impervious cover. Remote Sensing of Environment, 175, pp. 1-13. Tayyebi, A., Shafizadeh-Moghadam, H., & Tayyebi, A. H. (2018). Analyzing long-term spatio-temporal patterns of land surface temperature in response to rapid urbanization in the mega-city of Tehran. Land Use Policy, 71, pp. 459-469. Teodoro, A. C., Gutierres, F., Gomes, P., & Rocha, J. (2018). Remote Sensing Data and Image Classification Algorithms in the Identification of Beach Patterns Beach Management Tools-Concepts, Methodologies and Case Studies (pp. 579-587): Springer. Toth, C., & Jóźków, G. (2016). Remote sensing platforms and sensors: A survey. ISPRS Journal of Photogrammetry and Remote Sensing, 115, pp. 22-36. Tuholske, C., Tane, Z., López-Carr, D., Roberts, D., & Cassels, S. (2017). Thirty years of land use/cover change in the Caribbean: Assessing the relationship between urbanization and mangrove loss in Roatán, Honduras. Applied Geography, 88, pp. 84-93. Tuia, D., Flamary, R., & Courty, N. (2015). Multiclass feature learning for hyperspectral image classification: Sparse and hierarchical solutions. ISPRS Journal of Photogrammetry and Remote Sensing, 105, pp. 272-285. Tzotsos, A., & Argialas, D. (2008). Support vector machine classification for object-based image analysis Object-Based Image Analysis (pp. 663-677): Springer. Wang, L., Sousa, W., & Gong, P. (2004). Integration of object-based and pixel-based classification for mapping mangroves with IKONOS imagery. International journal of remote sensing, 25(24), pp. 5655-5668. Wang, Q., Zeng, Y.-e., & Wu, B.-w. (2016). Exploring the relationship between urbanization, energy consumption, and CO2 emissions in different provinces of China. Renewable and Sustainable Energy Reviews, 54, pp. 1563-1579. Wang, S., Ma, H., & Zhao, Y. (2014). Exploring the relationship between urbanization and the eco-environment—A case study of Beijing–Tianjin–Hebei region. Ecological Indicators, 45, pp. 171-183. Weitkamp, C. (2006). 
Lidar: range-resolved optical remote sensing of the atmosphere: Springer Science & Business. Wellmann, T., Haase, D., Knapp, S., Salbach, C., Selsam, P., & Lausch, A. (2018). Urban land use intensity assessment: The potential of spatio-temporal spectral traits with remote sensing. Ecological Indicators, 85, pp. 190-203. Whiteside, T. G., Boggs, G. S., & Maier, S. W. (2011). Comparing object-based and pixel-based classifications for mapping savannas. International Journal of Applied Earth Observation and Geoinformation, 13(6), pp. 884-893. Willhauck, G., Schneider, T., De Kok, R., & Ammer, U. (2000). Comparison of object oriented classification techniques and standard image analysis for the use of change detection between SPOT multispectral satellite images and aerial photos. Proceedings of XIX ISPRS congress. Winker, D. M., Vaughan, M. A., Omar, A., Hu, Y., Powell, K. A., Liu, Z., . . . Young, S. A. (2009). Overview of the CALIPSO mission and CALIOP data processing algorithms. Journal of Atmospheric and Oceanic Technology, 26(11), pp. 2310-2323. Yengoh, G. T., Dent, D., Olsson, L., Tengberg, A. E., & Tucker III, C. J. (2015). Use of the Normalized Difference Vegetation Index (NDVI) to Assess Land Degradation at Multiple Scales: Current Status, Future Trends, and Practical Considerations: Springer. Yu, Q., Gong, P., Clinton, N., Biging, G., Kelly, M., & Schirokauer, D. (2006). Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogrammetric Engineering & Remote Sensing, 72(7), pp. 799-811. Zhou, D., Zhao, S., Zhang, L., & Liu, S. (2016). Remotely sensed assessment of urbanization effects on vegetation phenology in China's 32 major cities. Remote Sensing of Environment, 176, pp. 272-281. Zhu, Z., Fu, Y., Woodcock, C. E., Olofsson, P., Vogelmann, J. E., Holden, C., . . . Yu, Y. (2016). Including land cover change in analysis of greenness trends using all available Landsat 5, 7, and 8 images: A case study from Guangzhou, China (2000–2014). Remote Sensing of Environment, 185, pp. 243-257.
APA, Harvard, Vancouver, ISO, and other styles
31

Beiderman, Yevgeny, Mark Kunin, Eli Kolberg, Ilan Halachmi, Binyamin Abramov, Rafael Amsalem, and Zeev Zalevsky. "Automatic solution for detection, identification and biomedical monitoring of a cow using remote sensing for optimised treatment of cattle." Journal of Agricultural Engineering 45, no. 4 (December 21, 2014): 153. http://dx.doi.org/10.4081/jae.2014.418.

Full text
Abstract:
In this paper we show how a novel photonic remote sensing system assembled on a robotic platform can extract vital biomedical parameters from cattle, including their heartbeat, breathing and chewing activity. The sensor is based upon a camera and a laser and uses self-interference phenomena. The whole system is intended to provide an automatic solution for detection, identification and biomedical monitoring of a cow. The detection algorithm is based upon image processing involving probability map construction. The identification algorithms involve well-known image pattern recognition techniques. The sensor is mounted on an automated robotic platform in order to support decision making about the animals. Field tests and computer-simulated results are presented.
APA, Harvard, Vancouver, ISO, and other styles
32

Illarionova, Svetlana, Dmitrii Shadrin, Alexey Trekin, Vladimir Ignatiev, and Ivan Oseledets. "Generation of the NIR Spectral Band for Satellite Images with Convolutional Neural Networks." Sensors 21, no. 16 (August 21, 2021): 5646. http://dx.doi.org/10.3390/s21165646.

Full text
Abstract:
The near-infrared (NIR) spectral range (from 780 to 2500 nm) of multispectral remote sensing imagery provides vital information for landcover classification, especially concerning vegetation assessment. Despite the usefulness of NIR, it is not always acquired together with the common RGB channels. Modern achievements in image processing via deep neural networks make it possible to generate artificial spectral information, for example, to solve the image colorization problem. In this research, we aim to investigate whether this approach can produce not only visually similar images but also an artificial spectral band that can improve the performance of computer vision algorithms for solving remote sensing tasks. We study the use of a generative adversarial network (GAN) approach in the task of NIR band generation using only the RGB channels of high-resolution satellite imagery. We evaluate the impact of the generated channel on model performance in a forest segmentation task. Our results show an increase in model accuracy when using the generated NIR compared to the baseline model, which uses only RGB (0.947 and 0.914 F1-scores, respectively). The presented study shows the advantages of generating the extra band, such as the opportunity to reduce the required amount of labeled data.
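As an illustrative aid only (the paper's actual GAN architecture is not reproduced here), a minimal PyTorch sketch of a generator that maps a 3-channel RGB patch to a 1-channel pseudo-NIR band could look as follows; all layer sizes and tensor shapes are hypothetical:

```python
import torch
import torch.nn as nn

class NIRGenerator(nn.Module):
    """Toy generator: 3-channel RGB in, 1-channel pseudo-NIR out.
    A sketch only; not the architecture used in the cited paper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # reflectance-like range [0, 1]
        )

    def forward(self, rgb):
        return self.net(rgb)

# Hypothetical usage: a 256x256 RGB patch produces a same-sized pseudo-NIR band.
generator = NIRGenerator()
fake_nir = generator(torch.rand(1, 3, 256, 256))
print(fake_nir.shape)  # torch.Size([1, 1, 256, 256])
```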
APA, Harvard, Vancouver, ISO, and other styles
33

Thirugnanam, Mythili, and S. Margret Anouncia. "Evaluating the performance of various segmentation techniques in industrial radiographs." Cybernetics and Information Technologies 14, no. 1 (March 1, 2014): 161–71. http://dx.doi.org/10.2478/cait-2014-0013.

Full text
Abstract:
At present, image processing concepts are widely used in different fields, such as remote sensing, communication, medical imaging, forensics and industrial inspection. Image segmentation is one of the key stages in image processing. Segmentation is a process of extracting various features of the image which can be merged or split to build the object of interest, on which image analysis and interpretation can be performed. Many researchers have proposed various segmentation algorithms to extract the region of interest from an image in various domains. Each segmentation algorithm has its own pros and cons depending on the nature of the image and its quality. In particular, extracting a region of interest from a grayscale image is considerably more complex than from colour images. This paper presents a study of various widely used segmentation techniques for grayscale images, mostly industrial radiographic images, that would help the process of defect detection in non-destructive testing.
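For illustration only (this is not the paper's benchmark code), a few of the widely used grayscale segmentation baselines it refers to can be run with OpenCV as shown below; the file name is a placeholder:

```python
import cv2
import numpy as np

# Hypothetical grayscale radiograph; replace with a real image path.
image = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    image = (np.random.rand(256, 256) * 255).astype(np.uint8)  # synthetic fallback

# Global Otsu thresholding.
_, otsu_mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Locally adaptive thresholding, often more robust to uneven illumination.
adaptive_mask = cv2.adaptiveThreshold(
    image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 5)

# Simple Canny edge map as an edge-based alternative.
edges = cv2.Canny(image, 50, 150)

print(otsu_mask.shape, adaptive_mask.shape, edges.shape)
```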
APA, Harvard, Vancouver, ISO, and other styles
34

Giacinto, Giorgio, Fabio Roli, and Lorenzo Bruzzone. "Combination of neural and statistical algorithms for supervised classification of remote-sensing images." Pattern Recognition Letters 21, no. 5 (May 2000): 385–97. http://dx.doi.org/10.1016/s0167-8655(00)00006-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Filimonov, P. A., M. L. Belov, S. E. Ivanov, V. A. Gorodnichev, and Yu V. Fedotov. "An algorithm for measuring wind speed based on sampling aerosol inhomogeneities." Computer Optics 44, no. 5 (October 2020): 791–96. http://dx.doi.org/10.18287/2412-6179-co-708.

Full text
Abstract:
A digital image processing algorithm based on sampling aerosol inhomogeneities was developed for the applied problem of laser remote sensing of wind velocity. Tests of the developed algorithm were conducted on synthetic data from numerical simulations and on data measured by a lidar. The developed algorithm processes the field of the aerosol backscattering coefficient in “Range–Time” coordinates and substantially increases the measurement accuracy in comparison with correlation methods.
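The paper's sampling-based algorithm is not reproduced here; as a hedged sketch of the baseline correlation idea it improves upon, the drift of aerosol inhomogeneities between two backscatter profiles can be estimated by cross-correlation over range bins (all values below are synthetic placeholders):

```python
import numpy as np

# Synthetic stand-in for a lidar backscatter field sampled in "Range-Time" coordinates.
range_step_m = 5.0   # hypothetical range-bin size
time_step_s = 1.0    # hypothetical interval between successive profiles
n_bins = 500

rng = np.random.default_rng(0)
profile_t0 = rng.normal(size=n_bins)
true_shift_bins = 2                               # inhomogeneities drift by 2 bins per step
profile_t1 = np.roll(profile_t0, true_shift_bins) + 0.1 * rng.normal(size=n_bins)

# Estimate the displacement of aerosol inhomogeneities via cross-correlation.
lags = np.arange(-50, 51)
corr = [np.corrcoef(profile_t0, np.roll(profile_t1, -lag))[0, 1] for lag in lags]
best_lag = lags[int(np.argmax(corr))]

wind_speed = best_lag * range_step_m / time_step_s
print(f"estimated along-beam speed: {wind_speed:.1f} m/s")
```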
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Jing Jing, and Hong Jun Wang. "A Non-Rigid Image Registration Algorithm Based on NURBS." Applied Mechanics and Materials 170-173 (May 2012): 3521–24. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.3521.

Full text
Abstract:
Non-rigid image registration is an interesting and challenging research topic in medical image processing, computer vision and remote sensing. In this paper we present a free-form deformation (FFD) algorithm based on NURBS (Non-Uniform Rational B-Splines), because a NURBS basis with a non-uniform grid offers higher registration precision and speed than a B-spline basis. In our experiments we quantitatively compare the NURBS-based FFD method with the B-spline-based FFD method. The results show that the algorithm can greatly improve registration precision.
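As a rough, non-authoritative sketch of the free-form deformation idea (using cubic interpolation of a coarse control grid as a stand-in for the NURBS basis, which is not implemented here), a moving image can be warped by a control-point displacement field like this:

```python
import numpy as np
from scipy import ndimage

# Toy free-form deformation: a coarse control grid of displacements is interpolated
# to full resolution (order=3 -> cubic splines, a simplified stand-in for NURBS)
# and used to warp a moving image.
rng = np.random.default_rng(1)
moving = rng.random((128, 128))

grid = 5  # 5x5 control points, purely illustrative
ctrl_dx = rng.normal(scale=2.0, size=(grid, grid))
ctrl_dy = rng.normal(scale=2.0, size=(grid, grid))

# Interpolate control-point displacements to a dense displacement field.
dense_dx = ndimage.zoom(ctrl_dx, 128 / grid, order=3)
dense_dy = ndimage.zoom(ctrl_dy, 128 / grid, order=3)

rows, cols = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
warped = ndimage.map_coordinates(
    moving, [rows + dense_dy, cols + dense_dx], order=1, mode="nearest")

print(warped.shape)  # (128, 128)
```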
APA, Harvard, Vancouver, ISO, and other styles
37

Maglinets, Yurii A., and Ruslan V. Brezhnev. "Technology of establishment of terrestrial objects aerospace survey-based object-oriented monitoring systems." E3S Web of Conferences 223 (2020): 01002. http://dx.doi.org/10.1051/e3sconf/202022301002.

Full text
Abstract:
The paper studies a technology for the establishment of Earth remote sensing systems based on the object-oriented approach to complex system analysis and design. The technology relies upon a knowledge system combining knowledge of a certain subject domain, knowledge of image processing algorithms, and a typology of the problems in question. Another important element of the technology is a set of human-computer interaction methods that make it possible to arrange a task setting and solving cycle without, or with little, involvement of image processing experts. The paper presents the input conditions and the main stages of the technology, as well as examples of possible use of the system in agricultural monitoring.
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Juan, Wei Liu, and Xinxin Zhang. "Evaluation algorithm of alhagi sparsifolia desertification control under different irrigation amounts." Earth Sciences Research Journal 24, no. 4 (January 26, 2021): 449–57. http://dx.doi.org/10.15446/esrj.v24n4.91626.

Full text
Abstract:
Desertification control is an important issue that must be considered in modern society. In order to effectively improve the accuracy and practicability of the evaluation of desertification control effects, indices of Alhagi sparsifolia under different irrigation amounts were taken as the research object, and an evaluation algorithm for the desertification control effect was proposed. Within the “vegetation–sandstorm–soil” index system, a number of indices were selected according to the core environmental parameters of Alhagi sparsifolia and grassland desertification, and the analytic hierarchy process, remote sensing, geographic information systems, and landscape technology were used to assign index weights of desertification control capacity, which were calculated by multiple discriminant matrices. Finally, data regression analysis was performed based on remote sensing and computer image information screening and processing to determine the final evaluation results. The experimental data show that the true positive rate of the algorithm in this paper is between 160 and 200, which lies within a wide range of advantage, indicating that the overall evaluation accuracy of the algorithm is high and the evaluation effect is good.
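For illustration of the analytic-hierarchy-process weighting step mentioned above (the comparison matrix and indicator groups below are hypothetical, not taken from the paper), AHP weights can be derived from a pairwise comparison matrix via its principal eigenvector:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three indicator groups
# (vegetation, sandstorm, soil); values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency check: CI / RI, with RI = 0.58 for a 3x3 matrix.
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```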
APA, Harvard, Vancouver, ISO, and other styles
39

Touzani, Samir, and Jessica Granderson. "Open Data and Deep Semantic Segmentation for Automated Extraction of Building Footprints." Remote Sensing 13, no. 13 (July 1, 2021): 2578. http://dx.doi.org/10.3390/rs13132578.

Full text
Abstract:
Advances in machine learning and computer vision, combined with increased access to unstructured data (e.g., images and text), have created an opportunity for automated extraction of building characteristics, cost-effectively and at scale. These characteristics are relevant to a variety of urban and energy applications, yet are time consuming and costly to acquire with today's manual methods. Several recent research studies have shown that, in comparison to more traditional methods based on a feature-engineering approach, an end-to-end learning approach based on deep learning algorithms significantly improves the accuracy of automatic building footprint extraction from remote sensing images. However, these studies used limited benchmark datasets that had been carefully curated and labeled. How the accuracy of these deep learning-based approaches holds up when using less curated training data has not received enough attention. The aim of this work is to leverage openly available data to automatically generate a larger training dataset with more variability in terms of regions and types of cities, which can be used to build more accurate deep learning models. In contrast to most benchmark datasets, the gathered data have not been manually curated, so the training dataset is not perfectly clean in terms of the remote sensing images exactly matching the ground-truth building footprints. A workflow that includes data pre-processing, deep learning semantic segmentation modeling, and results post-processing is introduced and applied to a dataset that includes remote sensing images from 15 cities and five counties from various regions of the USA, covering 8,607,677 buildings. The accuracy of the proposed approach was measured on an out-of-sample testing dataset corresponding to 364,000 buildings from three USA cities. The results compared favorably to those obtained from Microsoft's recently released US building footprint dataset.
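As a hedged, minimal sketch of the semantic segmentation step of such a workflow (not the paper's model or data; the tiny network, batch, and masks below are placeholders), a per-pixel building/background classifier can be trained like this in PyTorch:

```python
import torch
import torch.nn as nn

# Toy fully convolutional network trained with binary (building / background) masks.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                       # per-pixel building logit
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# Hypothetical mini-batch: RGB tiles and rasterized footprint masks.
images = torch.rand(4, 3, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()

for step in range(3):  # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.4f}")
```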
APA, Harvard, Vancouver, ISO, and other styles
40

Rashid, Ahmad Rafiuddin, and Arjun Chennu. "A Trillion Coral Reef Colors: Deeply Annotated Underwater Hyperspectral Images for Automated Classification and Habitat Mapping." Data 5, no. 1 (February 18, 2020): 19. http://dx.doi.org/10.3390/data5010019.

Full text
Abstract:
This paper describes a large dataset of underwater hyperspectral imagery that can be used by researchers in the domains of computer vision, machine learning, remote sensing, and coral reef ecology. We present the details of underwater data acquisition, processing and curation to create this large dataset of coral reef imagery annotated for habitat mapping. A diver-operated hyperspectral imaging system (HyperDiver) was used to survey 147 transects at 8 coral reef sites around the Caribbean island of Curaçao. The underwater proximal sensing approach produced fine-scale images of the seafloor, with more than 2.2 billion points of detailed optical spectra. Of these, more than 10 million data points have been annotated for habitat descriptors or taxonomic identity with a total of 47 class labels up to genus- and species-levels. In addition to HyperDiver survey data, we also include images and annotations from traditional (color photo) quadrat surveys conducted along 23 of the 147 transects, which enables comparative reef description between two types of reef survey methods. This dataset promises benefits for efforts in classification algorithms, hyperspectral image segmentation and automated habitat mapping.
APA, Harvard, Vancouver, ISO, and other styles
41

Andrade, R. B., J. M. F. Santos, G. A. O. P. Costa, G. L. A. Mota, P. N. Happ, and R. Q. Feitosa. "A COMPARISON BETWEEN THE HADOOP AND SPARK DISTRIBUTED FRAMEWORKS IN THE CONTEXT OF REGION-GROWING SEGMENTATION OF REMOTE SENSING IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 3–8. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-3-2019.

Full text
Abstract:
This work follows a line of research dedicated to the parallelization of image segmentation algorithms on distributed computing environments, which is motivated by the increasing resolutions and availability of Remote Sensing (RS) images. Here we focus on region-growing segmentation, which is regarded as a time consuming and demanding approach in terms of computational resources. Its parallelization is a complex problem since it usually affects the final outcome in comparison to what would be delivered by a sequential solution. This is due to the fact that subdividing an image to perform segmentation of its tiles concurrently usually introduces undesirable artifacts near the borders of the image tiles. Additional processing steps are then required to properly stitch together the segments along tile borders in order to eliminate such artifacts. In this work we evaluated alternative implementations of a previously proposed region-growing distributed segmentation approach, which was originally built on top of the Hadoop distributed computing framework. We developed a new implementation of the approach, built with the Spark framework, and compared its performance with that of the original implementation. In this investigation, RS images of various sizes were processed using different configurations of a physical computer cluster. We evaluated computational performance and assessed the differences among the segmentation outcomes generated by the alternative implementations. We also assessed the stability of the implementations by comparing the segmentations produced with different cluster configurations. Although the approach is, in principle, suitable for any region-growing algorithm, the experiments were performed with a particular segmentation method, and the results showed that the Spark implementation consistently outperformed the Hadoop counterpart, bringing in most cases a significant improvement in terms of processing time. The experimental results also attested to the stability of the distributed segmentation approach, as very similar results were produced with the alternative implementations running on different cluster configurations.
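A minimal PySpark sketch of the tile-distribution idea described above is given below; it is not the authors' pipeline, and segment_tile is a trivial placeholder for a real region-growing segmenter:

```python
from pyspark.sql import SparkSession
import numpy as np

def segment_tile(tile):
    """Placeholder 'segmentation' of one image tile; a real pipeline would run
    region growing here and later stitch segments across tile borders."""
    tile_id, pixels = tile
    labels = (pixels > pixels.mean()).astype(np.int32)
    return tile_id, labels

spark = SparkSession.builder.appName("tile-segmentation-sketch").getOrCreate()
tiles = [(i, np.random.rand(512, 512)) for i in range(8)]   # hypothetical image tiles

segmented = spark.sparkContext.parallelize(tiles).map(segment_tile).collect()
print(len(segmented), "tiles segmented")
spark.stop()
```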
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Lei, Yue Cheng, and Zhengjun Zhai. "Real-time Accurate Runway Detection based on Airborne Multi-sensors Fusion." Defence Science Journal 67, no. 5 (September 19, 2017): 542. http://dx.doi.org/10.14429/dsj.67.10439.

Full text
Abstract:
Existing runway detection methods focus mainly on image processing of remote sensing images based on computer vision techniques. However, these algorithms are too complicated and time-consuming to meet the demand for real-time airborne application. This paper proposes a novel runway detection method based on airborne multi-sensor data fusion which works in a coarse-to-fine hierarchical architecture. At the coarse layer, a vision projection model from the world coordinate system to the image coordinate system is built by fusing airborne navigation data and forward-looking sensing images, and a runway region of interest (ROI) is extracted from the whole image using the model. At the fine layer, EDLines, a real-time line segment detector, is applied to extract straight line segments from the ROI, and the fragmented line segments generated by EDLines are linked into two long runway lines. Finally, some unique runway features (e.g. the vanishing point and runway direction) are used to recognise the airport runway. The proposed method is tested on an image dataset provided by a flight simulation system. The experimental results show that the method has advantages in terms of speed, recognition rate and false alarm rate.
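As a hedged illustration of the coarse-to-fine idea (not the paper's code; all geometry values are placeholders, and HoughLinesP is used only as a stand-in for EDLines), the ROI projection plus line extraction steps might be sketched as:

```python
import cv2
import numpy as np

# Coarse step: project known runway corner coordinates into the image with a pinhole
# model (rvec/tvec from the navigation solution, K from camera calibration).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
rvec = np.zeros(3)
tvec = np.array([0.0, 50.0, 300.0])
runway_corners_world = np.array(
    [[-25, 0, 0], [25, 0, 0], [25, 0, 1500], [-25, 0, 1500]], dtype=float)

image_points, _ = cv2.projectPoints(runway_corners_world, rvec, tvec, K, None)
x, y, w, h = cv2.boundingRect(image_points.astype(np.float32))

# Fine step: detect straight line segments inside the ROI.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # placeholder sensor image
x0, y0 = max(x, 0), max(y, 0)
roi = frame[y0:y0 + h, x0:x0 + w]
edges = cv2.Canny(roi, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
print("candidate line segments:", 0 if segments is None else len(segments))
```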
APA, Harvard, Vancouver, ISO, and other styles
43

Musunuri, Yogendra Rao, and Oh-Seol Kwon. "Haze Removal Based on Refined Transmission Map for Aerial Image Matching." Applied Sciences 11, no. 15 (July 27, 2021): 6917. http://dx.doi.org/10.3390/app11156917.

Full text
Abstract:
A novel strategy is proposed to address block artifacts in the conventional dark channel prior (DCP). The DCP is used to estimate the transmission map through patch-based processing, which also results in image blurring. To enhance a degraded image, the proposed single-image dehazing technique restores a blurred image with a refined DCP based on a hidden Markov random field. Therefore, the proposed algorithm estimates a refined transmission map that can reduce the block artifacts and improve image clarity without explicit guided filters. Experiments were performed on remote-sensing images. The results confirm that the proposed algorithm is superior to conventional approaches to image haze removal. Moreover, the proposed algorithm is suitable for image matching based on local feature extraction.
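Below is a short sketch of the classic patch-based DCP baseline that the paper refines (the hidden-Markov-random-field refinement itself is not implemented here; the hazy image and patch size are placeholders):

```python
import cv2
import numpy as np

hazy = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in for a hazy RGB image
patch = 15
omega = 0.95

# Dark channel: per-pixel channel minimum eroded over a patch.
kernel = np.ones((patch, patch), np.uint8)
dark = cv2.erode(hazy.min(axis=2), kernel)

# Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
n_top = max(1, dark.size // 1000)
idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
A = hazy[idx].mean(axis=0)

# Patch-based transmission map (the map the paper refines to avoid block artifacts).
normalized = hazy / np.maximum(A, 1e-6)
transmission = 1.0 - omega * cv2.erode(normalized.min(axis=2), kernel)

# Recover the scene radiance with the atmospheric scattering model.
t = np.clip(transmission, 0.1, 1.0)[..., None]
dehazed = (hazy - A) / t + A
print(dehazed.shape)
```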
APA, Harvard, Vancouver, ISO, and other styles
44

Zhao, Quanhua, Shuhan Jia, and Yu Li. "Hyperspectral remote sensing image classification based on tighter random projection with minimal intra-class variance algorithm." Pattern Recognition 111 (March 2021): 107635. http://dx.doi.org/10.1016/j.patcog.2020.107635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Min, and Simone A. Ludwig. "Color Image Segmentation Using Fuzzy C-Regression Model." Advances in Fuzzy Systems 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/4582948.

Full text
Abstract:
Image segmentation is an important process in image analysis and computer vision and is a valuable tool that can be applied in the fields of image processing, health care, remote sensing, and traffic image detection. Given the lack of prior knowledge of the ground truth, unsupervised learning techniques like clustering have been largely adopted. Fuzzy clustering has been widely studied and successfully applied in image segmentation. In situations such as limited spatial resolution, poor contrast, overlapping intensities, and noise and intensity inhomogeneities, fuzzy clustering can retain much more information than hard clustering techniques. Most fuzzy clustering algorithms have originated from fuzzy c-means (FCM) and have been successfully applied in image segmentation. However, the cluster prototype of the FCM method is hyperspherical or hyperellipsoidal, so FCM may not provide an accurate partition in situations where the data consist of arbitrary shapes. Therefore, a Fuzzy C-Regression Model (FCRM) using spatial information has been proposed whose prototypes are hyperplanes and can be either linear or nonlinear, allowing for better cluster partitioning. This paper implements FCRM and applies the algorithm to color segmentation using Berkeley's segmentation database. The results show that FCRM obtains more accurate results compared to other fuzzy clustering algorithms.
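For context, a minimal NumPy implementation of the fuzzy c-means baseline that FCRM extends is sketched below (the regression prototypes of FCRM itself are not implemented; the pixel data are random placeholders):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    """Plain FCM: alternate prototype and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster prototypes
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)             # membership update
    return centers, U

pixels = np.random.rand(1000, 3)                      # placeholder color samples
centers, memberships = fuzzy_c_means(pixels)
labels = memberships.argmax(axis=1)                   # hard labels for visualization
print(centers.shape, labels.shape)
```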
APA, Harvard, Vancouver, ISO, and other styles
46

Tan, Qulin, Bin Guo, Jun Hu, Xiaofeng Dong, and Jiping Hu. "Object-oriented remote sensing image information extraction method based on multi-classifier combination and deep learning algorithm." Pattern Recognition Letters 141 (January 2021): 32–36. http://dx.doi.org/10.1016/j.patrec.2020.08.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ma, Andong, and Anthony M. Filippi. "A novel spatial recurrent neural network for hyperspectral imagery classification." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-233-2019.

Full text
Abstract:
Hyperspectral images (HSIs) contain hundreds of spectral bands, providing high-resolution spectral information pertaining to the Earth’s surface. Additionally, abundant spatial contextual information can also be obtained simultaneously from a HSI. To characterize the properties of ground objects, classification is the most widely-used technology in the field of remote sensing, where each pixel in a HSI is assigned to a pre-defined class. Over the past decade, deep learning has attracted increasing attention in the machine-learning and computer-vision domains, due to its favourable performances for various types of tasks, and it has been successfully introduced to the remote-sensing community. Instead of utilizing the shallow features within a given image, which is the approach that is generally adopted in other conventional classification methods, deep-learning algorithms can extract hierarchical features from raw HSI data. Within the deep-learning framework, recurrent neural networks (RNNs), which are able to encode sequential features, have exhibited promising capabilities and have achieved encouraging performances, especially for the natural-language processing and speech-recognition communities. As multi-temporal remote-sensing images can be readily obtained from increasing numbers of satellite and unmanned aircraft systems, and since analysis of such multi-temporal data comprises a critical issue within numerous research subfields, including land-cover and land-change analyses, and land-resource management, RNNs have been applied in recent studies in order to extract temporal sequential features from multi-temporal remote-sensing images for the purpose of image classification. Apart from using multi-temporal image datasets, RNNs can also be utilized on a single image, where the spectral feature/band of each individual pixel can be taken as a sequential feature for the input layer of RNNs. However, the application of such sequential feature extraction that relies on a single image still needs to be further investigated since applying RNNs to spectral bands will directly introduce more parameters that need to be optimized, consequently increasing the total training time.

In this study, we propose a novel RNN-based HSI classification framework. In this framework, unlabelled pixels obtained from a single image are considered when constructing sequential features. Two spatial similarity measurements, referred to as pixel-matching and block-matching, respectively, are employed to extract pixels that are “similar” to the target pixel. Then, the sequential feature of the target pixel is constructed by exploiting several of the most “similar” pixels and ordering them based on their similarities to the target pixel. The aforementioned two schemes are advantageous, as unlabelled pixels within the given HSI are taken into consideration for similarity measurement and sequential feature construction for the RNN model. Moreover, the block-matching scheme also takes advantage of spatial contextual information, which has been widely utilized in spatial-spectral-based HSI classification methods. To evaluate the proposed methods, two benchmark HSIs are used, including a HSI collected over Pavia University, Italy by the airborne Reflective Optics System Imaging Spectrometer (ROSIS) sensor, and an image acquired over the Salinas Valley, California, USA via the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Spatio-temporally coincident ground-reference data accompanies each of these respective HSIs. In addition, the proposed methods are compared with three state-of-the-art algorithms, including support vector machine (SVM), the 1-dimensional convolutional neural network (1DCNN), and the 1-dimensional RNN (1DRNN).

Experimental results indicate that our proposed methods achieve markedly better classification performance compared with the baseline algorithms on both datasets. For example, for the Pavia University image, the block-matching based RNN achieves the highest overall classification accuracy, with 94.32% accuracy, which is 9.87% higher than the next most accurate algorithm of the aforementioned three baseline methods, which in this case is the 1DCNN, with 84.45% overall accuracy. More specifically, the block-matching method performs better than the pixel-matching method in terms of both quantitative and qualitative assessments. Based on visual assessment/interpretation of the classification maps, it is apparent that “salt-and-pepper” noise is markedly alleviated; with block-matching, smoother classified images are generated compared with pixel-matching-based methods and the three baseline algorithms. Such results demonstrate the effectiveness of utilizing spatial contextual information in the similarity measurement.
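A hedged NumPy sketch of the sequence-construction idea described above is shown below; it illustrates the pixel-matching variant on synthetic data (the block-matching scheme would compare local neighbourhoods instead), and the cube size and sequence length are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
hsi = rng.random((100, 100, 103))            # placeholder hyperspectral cube (rows, cols, bands)
target = hsi[50, 50]                         # spectrum of the target pixel

# Rank all pixels by spectral similarity to the target pixel.
flat = hsi.reshape(-1, hsi.shape[-1])
dist = np.linalg.norm(flat - target, axis=1)
seq_len = 10
order = np.argsort(dist)[1:seq_len + 1]      # skip index 0, i.e. the target itself
sequence = flat[order]                       # (seq_len, bands), ordered by similarity

# The sequence plus the target pixel would then be fed to an RNN, one pixel per time step.
rnn_input = np.vstack([sequence, target[None, :]])
print(rnn_input.shape)                       # (11, 103)
```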
APA, Harvard, Vancouver, ISO, and other styles
48

Balsi, Marco, Monica Moroni, Valter Chiarabini, and Giovanni Tanda. "High-Resolution Aerial Detection of Marine Plastic Litter by Hyperspectral Sensing." Remote Sensing 13, no. 8 (April 16, 2021): 1557. http://dx.doi.org/10.3390/rs13081557.

Full text
Abstract:
An automatic custom-made procedure is developed to identify macroplastic debris loads in coastal and marine environments through hyperspectral imaging from unmanned aerial vehicles (UAVs). Results obtained during a remote-sensing field campaign carried out on the seashore of Sassari (Sardinia, Italy) are presented. A push-broom-sensor-based spectral device, carried onboard a DJI Matrice 600 drone, was employed for the acquisition of spectral data in the range 900–1700 nm. The hyperspectral platform was realized by assembling commercial devices, whereas algorithms for mosaicking, post-flight georeferencing, and orthorectification of the acquired images were developed in-house. Generation of the hyperspectral cube was based on mosaicking visible-spectrum images acquired synchronously with the hyperspectral lines, by performing correlation-based registration and applying the same translations, rotations, and scale changes to the hyperspectral data. Plastics detection was based on statistically relevant feature selection and Linear Discriminant Analysis, trained on a manually labeled sample. The results obtained from the inspection of either the beach site or the sea water facing the beach clearly show the successful separate identification of polyethylene (PE) and polyethylene terephthalate (PET) objects through post-processing data treatment based on the developed classifier algorithm. As a further implementation of the procedure described, direct real-time processing by an embedded computer carried onboard the drone permitted immediate plastics identification (and visual inspection in synchronized images) during the UAV survey, as documented by short video sequences provided in this research paper.
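As a non-authoritative sketch of the classification step (statistical band selection followed by Linear Discriminant Analysis), scikit-learn can be used as below; the spectra, band count, and class labels are synthetic placeholders, not the campaign data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_bands = 224                                     # hypothetical 900-1700 nm band count
spectra = rng.random((600, n_bands))              # placeholder labeled spectra
labels = rng.integers(0, 3, size=600)             # e.g. 0 = background, 1 = PE, 2 = PET

model = make_pipeline(
    SelectKBest(f_classif, k=30),                 # keep the most discriminative bands
    LinearDiscriminantAnalysis(),
)
model.fit(spectra, labels)
print("training accuracy:", round(model.score(spectra, labels), 3))
```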
APA, Harvard, Vancouver, ISO, and other styles
49

Donaldson, Dave, and Adam Storeygard. "The View from Above: Applications of Satellite Data in Economics." Journal of Economic Perspectives 30, no. 4 (November 1, 2016): 171–98. http://dx.doi.org/10.1257/jep.30.4.171.

Full text
Abstract:
The past decade or so has seen a dramatic change in the way that economists can learn by watching our planet from above. A revolution has taken place in remote sensing and allied fields such as computer science, engineering, and geography. Petabytes of satellite imagery have become publicly accessible at increasing resolution, many algorithms for extracting meaningful social science information from these images are now routine, and modern cloud-based processing power allows these algorithms to be run at global scale. This paper seeks to introduce economists to the science of remotely sensed data, and to give a flavor of how this new source of data has been used by economists so far and what might be done in the future.
APA, Harvard, Vancouver, ISO, and other styles
50

Qu, Zhi, Yaqiong Xing, and Yafei Song. "An Image Enhancement Method Based on Non-Subsampled Shearlet Transform and Directional Information Measurement." Information 9, no. 12 (December 6, 2018): 308. http://dx.doi.org/10.3390/info9120308.

Full text
Abstract:
Based on the advantages of the non-subsampled shearlet transform (NSST) in image processing and the characteristics of remote sensing imagery, NSST was applied to enhance blurred images. In the NSST transform domain, directional information measurement can highlight the textural features of image edges and reduce image noise. Therefore, NSST was applied to the detail enhancement of the high-frequency sub-band coefficients. Based on the characteristics of the low-frequency image, the retinex method was used to enhance the low-frequency component. Then, an inverse NSST transformation was performed on the enhanced low- and high-frequency coefficients to obtain the enhanced image. Computer simulation experiments showed that, compared with a traditional image enhancement strategy, the method proposed in this paper can enrich image details and improve the visual effect. Compared with the other algorithms listed in this paper, the brightness, contrast, edge strength, and information entropy of the image enhanced by this method are improved. In addition, in experiments on noisy images, various objective evaluation indices show that this method enhances the image with the least noise information, which further indicates that it can suppress noise while improving image quality, and has a certain level of effectiveness and practicability.
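As a rough illustration only, the decompose-enhance-reconstruct idea can be sketched with a single-level wavelet transform standing in for NSST (which is not available in common Python libraries); the retinex step here is a naive single-scale version and the image is a placeholder:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

image = np.random.rand(256, 256).astype(np.float64)      # placeholder blurred image

# Decompose into a low-frequency approximation and high-frequency detail bands.
low, (lh, hl, hh) = pywt.dwt2(image, "db2")

# Single-scale retinex on the approximation band: log(image) - log(smoothed image).
retinex_low = np.log1p(low) - np.log1p(gaussian_filter(low, sigma=5))
retinex_low = (retinex_low - retinex_low.min()) / (np.ptp(retinex_low) + 1e-9) * low.max()

# Mildly amplify the detail bands, then reconstruct.
gain = 1.5
enhanced = pywt.idwt2((retinex_low, (gain * lh, gain * hl, gain * hh)), "db2")
print(enhanced.shape)
```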
APA, Harvard, Vancouver, ISO, and other styles