
Journal articles on the topic 'Computer vision Aerial photography'


Consult the top 50 journal articles for your research on the topic 'Computer vision Aerial photography.'


1

Lopatin, Yaroslav, and Wilhelm Heger. "GEODESY, CARTOGRAPHY, AND AERIAL PHOTOGRAPHY." Geodesy, Cartography, and Aerial Photography, no. 93 (June 23, 2021): 5–12. http://dx.doi.org/10.23939/istcgcap2021.93.005.

Abstract:
The aim of the work is to develop an automated measuring system for a mechanical gyrocompass, with the help of specially developed hardware and software, in order to facilitate the operation of the device and minimize observer errors. The developed complex provides automation only for the time method, since the turning-point method requires constant contact with the motion screw of the total station. The project is based on an integrated system whose hardware comprises a single-board computer, a camera, and a lens. The main software component is a motion-recognition algorithm based on image processing, implemented in the Python programming language with the open-source computer vision library OpenCV. The hardware captures a video image of the gyroscope's reference scale, and the software identifies in this image the moving light indicator and its position relative to the scale. The result of the study is a functioning automatic measurement system, which determines the azimuth of the direction with the same accuracy as manual measurements. The system is controlled remotely via a computer over a Wi-Fi network. To test the system, a series of automatic and manual measurements were performed simultaneously at the same point for the same direction. The results show that the accuracy of the system is within the limits specified by the device manufacturer for manual measurements. The application of computer vision technology, namely tracking a moving object in the image for gyroscopic measurements, can give significant impetus to the development of automation systems for a wide range of measuring instruments, which in turn can improve the accuracy of measurement results. The developed system can be used with the Gyromax AK-2M gyrocompass from GeoMessTechnik for automated measurements and for training new operators.
The developed model helps avoid gross observer errors and simplifies the measurement process, which no longer demands the constant presence of the operator near the device. In some dangerous conditions, this is a significant advantage.
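The tracking idea this abstract describes — locating a bright moving light indicator against a reference scale in a video frame — can be sketched with plain thresholding and a centroid computation. This is a minimal NumPy sketch, not the authors' OpenCV code; the frame contents, threshold, and linear scale mapping are invented for illustration:

```python
import numpy as np

def indicator_position(frame, thresh=200):
    """Locate the bright indicator in a greyscale frame.

    Returns the (row, col) centroid of all pixels at or above `thresh`,
    or None when no indicator is visible.
    """
    mask = frame >= thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def scale_reading(col, col_left, col_right, value_left, value_right):
    """Linearly map a column position to a reading on the reference scale."""
    t = (col - col_left) / (col_right - col_left)
    return value_left + t * (value_right - value_left)

# Synthetic 40x100 frame with a bright 3x3 indicator centred at column 60.
frame = np.zeros((40, 100), dtype=np.uint8)
frame[19:22, 59:62] = 255

r, c = indicator_position(frame)
reading = scale_reading(c, col_left=10, col_right=90, value_left=-30, value_right=30)
```

In a real pipeline the same two steps would run per video frame, with the indicator's track over time feeding the azimuth computation.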
2

Verhoeven, Geert. "Taking computer vision aloft - archaeological three-dimensional reconstructions from aerial photographs with photoscan." Archaeological Prospection 18, no. 1 (January 2011): 67–73. http://dx.doi.org/10.1002/arp.399.

3

Jacob-Loyola, Nicolás, Felipe Muñoz-La Rivera, Rodrigo F. Herrera, and Edison Atencio. "Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction." Sensors 21, no. 12 (June 20, 2021): 4227. http://dx.doi.org/10.3390/s21124227.

Abstract:
The physical progress of a construction project is monitored by an inspector responsible for verifying and backing up progress information, usually through site photography. Progress monitoring has improved thanks to advances in image acquisition, computer vision, and the development of unmanned aerial vehicles (UAVs). However, no comprehensive and simple methodology exists to guide practitioners and facilitate the use of these methods. This research provides recommendations for the periodic recording of the physical progress of a construction site through the manual operation of UAVs and the use of point clouds obtained with photogrammetric techniques. The programmed progress is then compared with the actual progress in a 4D BIM environment. The methodology was applied to the construction of a reinforced concrete residential building. The results showed that the methodology is effective for UAV operation on the work site, and that the photogrammetric visual records are useful for monitoring physical progress and communicating the work performed to project stakeholders.
4

Park, J. W., H. H. Jeong, J. S. Kim, and C. U. Choi. "Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 941–44. http://dx.doi.org/10.5194/isprsarchives-xli-b7-941-2016.

Abstract:
Recently, aerial photography with an unmanned aerial vehicle (UAV) system has relied on remote control through a ground control system over a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, this RF-modem method has limitations in long-distance communication. In this study, a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV communication module that carries out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone over the areas that need image capture, together with software for operating and managing the smart camera. The system comprises automatic shooting using the smart camera's sensors and a shooting catalogue that manages the captured images and their metadata. The UAV imagery processing module used OpenDroneMap. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used were Android, OpenCV (Open Source Computer Vision Library), RTKLIB, and OpenDroneMap.
6

Shu, Joseph Shou-Pyng, and Herbert Freeman. "Cloud shadow removal from aerial photographs." Pattern Recognition 23, no. 6 (January 1990): 647–56. http://dx.doi.org/10.1016/0031-3203(90)90040-r.

7

Zhou, Yu, Chunxue Wu, Qunhui Wu, Zelda Makati Eli, Naixue Xiong, and Sheng Zhang. "Design and Analysis of Refined Inspection of Field Conditions of Oilfield Pumping Wells Based on Rotorcraft UAV Technology." Electronics 8, no. 12 (December 9, 2019): 1504. http://dx.doi.org/10.3390/electronics8121504.

Abstract:
The traditional oil well monitoring method relies on manual acquisition and various high-precision sensors. Using the indicator diagram to judge the working condition of the well is not only difficult to establish but also consumes considerable manpower and financial resources. This paper proposes the use of computer vision for detecting working conditions in oil extraction. Combined with the advantages of an unmanned aerial vehicle (UAV), UAV aerial images are used to detect on-site working conditions in real time by tracking the working status of the head and other related parts of the pumping unit. Considering the real-time requirements of working-condition detection, this paper proposes a framework that combines You Only Look Once version 3 (YOLOv3) with the SORT algorithm to perform multi-target tracking in the tracking-by-detection paradigm. The quality of target detection in the framework is the key factor affecting the tracking effect. The experimental results show that a good detector lets the tracker run in real time and supports real-time detection of the working condition, giving the approach strong practical applicability.
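The tracking-by-detection loop described in this abstract hands per-frame detections (here, from YOLOv3) to a tracker that must associate them with existing tracks. A minimal greedy IoU matcher conveys the core of that association step — this pure-Python sketch is a simplification that omits SORT's Kalman-filter prediction and Hungarian assignment:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_min=0.3):
    """Greedily match each track to its best unmatched detection.

    Returns {track_index: detection_index}; tracks whose best overlap
    falls below `iou_min` stay unmatched (lost or occluded targets).
    """
    matches, used = {}, set()
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    for score, ti, di in pairs:
        if score < iou_min:
            break
        if ti not in matches and di not in used:
            matches[ti] = di
            used.add(di)
    return matches

# Two tracks from the previous frame, two detections in the current one.
tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
detections = [(52, 51, 62, 61), (1, 1, 11, 11)]
matches = associate(tracks, detections)
```

As the abstract notes, detection quality dominates: a sloppy detector shrinks every overlap score and matches fall below the threshold, so tracks fragment regardless of the matcher.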
8

Marchewka, Adam, Patryk Ziółkowski, and Victor Aguilar-Vidal. "Framework for Structural Health Monitoring of Steel Bridges by Computer Vision." Sensors 20, no. 3 (January 27, 2020): 700. http://dx.doi.org/10.3390/s20030700.

Abstract:
Monitoring the structural condition of steel bridges is an important issue: infrastructure in good condition ensures the safety and economic well-being of society. At the same time, owing to continuous development, the rising wealth of society, and the socio-economic integration of countries, the number of infrastructure objects is growing. There is therefore a need for an easy-to-use and relatively low-cost method of bridge diagnostics. These benefits can be achieved through unmanned-aerial-vehicle-based remote sensing and digital image processing. In our study, we present a state-of-the-art framework for structural health monitoring of steel bridges that covers a literature review on steel bridge health monitoring, drone route planning, image acquisition, identification of visual markers that may indicate a poor condition of the structure, and a determination of the scope of applicability. The presented image-processing framework is suitable for the diagnostics of steel truss riveted bridges. In our considerations, we used photographic documentation of the Fitzpatrick Bridge located in Tallassee, Alabama, USA.
9

Verhoeven, G., M. Doneus, Ch Briese, and F. Vermeulen. "Mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs." Journal of Archaeological Science 39, no. 7 (July 2012): 2060–70. http://dx.doi.org/10.1016/j.jas.2012.02.022.

10

Fuentes, Sigfredo, Gabriela Chacon, Damir D. Torrico, Andrea Zarate, and Claudia Gonzalez Viejo. "Spatial Variability of Aroma Profiles of Cocoa Trees Obtained through Computer Vision and Machine Learning Modelling: A Cover Photography and High Spatial Remote Sensing Application." Sensors 19, no. 14 (July 11, 2019): 3054. http://dx.doi.org/10.3390/s19143054.

Abstract:
Cocoa is an important commodity crop: it is used to produce chocolate, one of the most complex products from the sensory perspective, and it commonly grows in developing countries close to the tropics. This paper presents novel techniques applied using cover photography and a novel computer application (VitiCanopy) to assess the canopy architecture of cocoa trees in a commercial plantation in Queensland, Australia. From the monitored cocoa trees, pod samples were collected, fermented, dried, and ground to obtain the aroma profile per tree using gas chromatography. The canopy architecture data were used as inputs to an artificial neural network (ANN) algorithm, with the aroma profile, considering six main aromas, as targets. The ANN model rendered high accuracy (correlation coefficient (R) = 0.82; mean squared error (MSE) = 0.09) with no overfitting. The model was then applied to an aerial image of the whole cocoa field studied to produce canopy vigour and aroma profile maps down to the tree-by-tree scale. The tool developed could significantly aid canopy management practices in cocoa trees, which have a direct effect on cocoa quality.
11

Larsen, Morten, and Mats Rudemo. "Optimizing templates for finding trees in aerial photographs." Pattern Recognition Letters 19, no. 12 (October 1998): 1153–62. http://dx.doi.org/10.1016/s0167-8655(98)00092-0.

12

Wang, Chengye, Liuqing Huang, and Azriel Rosenfeld. "Detecting clouds and cloud shadows on aerial photographs." Pattern Recognition Letters 12, no. 1 (January 1991): 55–64. http://dx.doi.org/10.1016/0167-8655(91)90028-k.

13

Chen, H. R., and Y. H. Tseng. "STUDY OF AUTOMATIC IMAGE RECTIFICATION AND REGISTRATION OF SCANNED HISTORICAL AERIAL PHOTOGRAPHS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 24, 2016): 1229–36. http://dx.doi.org/10.5194/isprsarchives-xli-b8-1229-2016.

Abstract:
Historical aerial photographs provide direct evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Academia Sinica, Taiwan, has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps and images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images with SIFT (Scale-Invariant Feature Transform), so that the great quantity of images can be handled by computer vision. SIFT is one of the most popular methods of image feature extraction and matching. The algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, the image feature points of more photographs can be used to build a control-image database. Every new image will be treated as a query image; if its feature points match features in the database, the query image probably overlaps the control images. As the database grows, more and more query images can be matched and aligned automatically. Research on environmental change across periods can then be investigated with these geo-referenced spatio-temporal data.
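The RANSAC step in this pipeline can be illustrated on the simplest possible motion model: estimating a 2-D translation between matched keypoints while a few gross mismatches are present. This is a hypothetical NumPy sketch standing in for the SIFT correspondences and richer transformation of the actual work — each iteration hypothesises a model from a minimal sample and keeps the largest consensus set:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation mapping src -> dst despite outliers.

    Each iteration hypothesises the shift from one correspondence and
    counts inliers within `tol` pixels; the best hypothesis is refined
    by averaging its consensus set of inliers.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                 # minimal sample: 1 match
        err = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    shift = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return shift, best_inliers

# 20 true correspondences shifted by (12, -7), plus 5 gross mismatches.
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(25, 2))
dst = src + np.array([12.0, -7.0])
dst[20:] += rng.uniform(100, 300, size=(5, 2))  # simulated bad matches

shift, inliers = ransac_translation(src, dst)
```

The paper's actual registration would fit a projective model from several correspondences per sample, but the hypothesise-and-verify structure is the same.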
15

Nakano, T., I. Kamiya, M. Tobita, J. Iwahashi, and H. Nakajima. "Landform monitoring in active volcano by UAV and SfM-MVS technique." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 27, 2014): 71–75. http://dx.doi.org/10.5194/isprsarchives-xl-8-71-2014.

Abstract:
Nishinoshima volcano in the Ogasawara Islands has been erupting since November 2013. The eruption formed and enlarged a new island, which fused with the old Nishinoshima Island. We performed automated aerial photography with an unmanned aerial vehicle (UAV) over the joined Nishinoshima Island on March 22 and July 4, 2014. We produced ortho-mosaic photos and digital elevation model (DEM) data with new photogrammetry software based on computer vision techniques: Structure from Motion (SfM) for estimating the photographic position of the camera and Multi-View Stereo (MVS) for generating the 3-D model. We also estimated the area and volume of the new island by analysing the ortho-mosaic photos and DEM data. The transition of volume estimated from the UAV photography and other surveys shows that the volcanic activity remains at its initial level. The ortho-mosaic photos and DEM data were used to create an aerial photo-interpretation map and a 3-D map. These operations revealed new knowledge, and problems still to be solved, in photographing and analysing with UAVs and these new techniques, as this was in some respects a first case.
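Once a DEM is in hand, the area and volume estimates this abstract mentions reduce to summing grid cells above a reference level. A schematic NumPy sketch with an invented toy grid (the real analysis used the photogrammetric DEM of Nishinoshima, not these numbers):

```python
import numpy as np

def island_area_volume(dem, sea_level=0.0, cell_size=1.0):
    """Area and volume of terrain above `sea_level` from a gridded DEM.

    dem: 2-D array of elevations (m); cell_size: grid spacing (m).
    Area counts above-water cells times the cell footprint; volume
    integrates each cell's height above sea level over that footprint.
    """
    above = dem > sea_level
    area = above.sum() * cell_size ** 2
    volume = np.where(above, dem - sea_level, 0.0).sum() * cell_size ** 2
    return area, volume

# Toy 4x4 DEM (metres): a small patch of land in a sea at 0 m.
dem = np.array([
    [-5.0, -5.0, -5.0, -5.0],
    [-5.0,  2.0,  1.0, -5.0],
    [-5.0,  1.0,  3.0, -5.0],
    [-5.0, -5.0, -5.0, -5.0],
])
area, volume = island_area_volume(dem, sea_level=0.0, cell_size=10.0)
```

Differencing two such volume estimates from successive surveys gives the eruption-rate transition the authors tracked between flights.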
16

Lee, Hsi-Jian, and Wen-Ling Lei. "Region matching and depth finding for 3D objects in stereo aerial photographs." Pattern Recognition 23, no. 1-2 (January 1990): 81–94. http://dx.doi.org/10.1016/0031-3203(90)90050-u.

17

Wang, Junshu, Yue Yang, Yuan Chen, and Yuxing Han. "LighterGAN: An Illumination Enhancement Method for Urban UAV Imagery." Remote Sensing 13, no. 7 (April 2, 2021): 1371. http://dx.doi.org/10.3390/rs13071371.

Abstract:
In UAV-based urban observation and monitoring, the performance of computer vision algorithms is inevitably limited by degradation caused by low illumination and light pollution, so image enhancement is a prerequisite for subsequent image processing algorithms. We therefore propose LighterGAN, a deep-learning model for UAV low-illumination image enhancement based on a generative adversarial network. The design of LighterGAN follows the CycleGAN model, with two improvements, an attention mechanism and a semantic consistency loss, added to the original structure. An unpaired dataset captured by urban UAV aerial photography was used to train this unsupervised model. To explore the advantages of the improvements, both the illumination enhancement performance and the improved generalisation ability of LighterGAN were demonstrated in comparative experiments combining subjective and objective evaluations. In experiments against five cutting-edge image enhancement algorithms on the test set, LighterGAN achieved the best results in both visual perception and PIQE (perception-based image quality evaluator, a MATLAB built-in function; the lower the score, the higher the image quality) score of the enhanced images, 4.91 and 11.75 respectively, better than the state-of-the-art EnlightenGAN. On the low-illumination sub-dataset Y (2000 images), LighterGAN also achieved the lowest PIQE score of 12.37, 2.85 points below second place. Moreover, the improved generalisation ability relative to CycleGAN was also demonstrated:
on the test-set generated images, LighterGAN was 6.66 per cent higher than CycleGAN in subjective authenticity assessment and 3.84 lower in PIQE score, while on the generated images for the whole dataset the PIQE score of LighterGAN was 11.67, 4.86 lower than CycleGAN.
18

Eltner, Anette, Andreas Kaiser, Carlos Castillo, Gilles Rock, Fabian Neugirg, and Antonio Abellán. "Image-based surface reconstruction in geomorphometry – merits, limits and developments." Earth Surface Dynamics 4, no. 2 (May 19, 2016): 359–89. http://dx.doi.org/10.5194/esurf-4-359-2016.

Abstract:
Abstract. Photogrammetry and geosciences have been closely linked since the late 19th century through the acquisition of high-quality 3-D data sets of the environment, but photogrammetry has so far been restricted to a limited range of remote sensing specialists because of the considerable cost of metric systems for the acquisition and treatment of airborne imagery. Today, a wide range of commercial and open-source software tools enable the generation of 3-D and 4-D models of complex geomorphological features by geoscientists and other non-expert users. In addition, very recent rapid developments in unmanned aerial vehicle (UAV) technology allow for the flexible generation of high-quality aerial surveys and ortho-photography at relatively low cost. The increasing computing capabilities of the last decade, together with the development of high-performance digital sensors and important software innovations from the computer vision and visual perception research fields, have extended the rigorous processing of stereoscopic image data to 3-D point cloud generation from series of non-calibrated images. Structure-from-motion (SfM) workflows are based on algorithms for efficient and automatic orientation of large image sets without further data acquisition information; examples include robust feature detectors like the scale-invariant feature transform for 2-D imagery. Nevertheless, the importance of well-established fieldwork strategies, proper camera settings, ground control points and ground truth for understanding the different sources of error still needs to be absorbed into common scientific practice. This review intends not only to summarise the current state of the art on using SfM workflows in geomorphometry but also to give an overview of terms and fields of application.
Furthermore, this article aims to quantify already achieved accuracies and scales, using different strategies, in order to evaluate possible stagnation of current developments and to identify key future challenges. It is our belief that lessons learned from former articles, scientific reports and book chapters concerning the identification of common errors or "bad practices", along with other valuable information, may help guide the future use of SfM photogrammetry in the geosciences.
19

Pires de Lima, Rafael, and Kurt Marfurt. "Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis." Remote Sensing 12, no. 1 (December 25, 2019): 86. http://dx.doi.org/10.3390/rs12010086.

Abstract:
Remote-sensing image scene classification can provide significant value, ranging from forest fire monitoring to land-use and land-cover classification. From the first aerial photographs of the early 20th century to the satellite imagery of today, the amount of remote-sensing data has increased geometrically, with ever higher resolution. The need to analyze these modern digital data motivated research to accelerate remote-sensing image classification. Fortunately, great advances have been made by the computer vision community in classifying natural images, i.e., photographs taken with an ordinary camera. Natural image datasets can range up to millions of samples and are, therefore, amenable to deep-learning techniques. Many fields of science, remote sensing included, were able to exploit the success of natural image classification by convolutional neural network models using a technique commonly called transfer learning. We provide a systematic review of transfer learning for scene classification using different datasets and different deep-learning models. We evaluate how the specialization of convolutional neural network models affects the transfer learning process by splitting original models at different points. As expected, we find the choice of hyperparameters used to train the model has a significant influence on final performance. Curiously, we find transfer learning from models trained on larger, more generic natural image datasets outperformed transfer learning from models trained directly on smaller remotely sensed datasets. Nonetheless, the results show that transfer learning provides a powerful tool for remote-sensing scene classification.
20

Rosalina, Elin ,., and Soffiana Agustin. "KLASIFIKASI UMUR LAHAN PERKEBUNAN KELAPA SAWIT PADA CITRA FOTO UDARA BERDASARKAN TEKSTUR MENGGUNAKAN METODE NAÏVE BAYES." INDEXIA : Infomatic and Computational Intelligent Journal 1, no. 1 (April 1, 2019): 6. http://dx.doi.org/10.30587/indexia.v1i1.820.

Abstract:
Developments and advances in technology and information have a considerable influence on image analysis. Image manipulation is now easier to perform, which is one factor behind the emergence of various methods of image segmentation. Image segmentation is the first step in image processing, pattern recognition, and computer vision, because most image processing depends on the results of the enhancement or image-repair operation. This work determines the type of oil palm plantation land using the Naïve Bayes method. Pre-processing starts with converting the RGB image to greyscale, followed by histogram equalization and image inversion. Feature extraction is carried out after image repair using the co-occurrence matrix method, which yields six features: angular second moment, contrast, correlation, variance, inverse difference moment, and entropy. Naïve Bayes then classifies the data; the classes used in this system test are Young, Mature, and Old oil palm, and the class with the largest posterior value is chosen. The system was built with the Matlab R2011b application program. The computation uses images of various types of oil palm trees on plantations in Kalimantan, taken from aerial photographs and cropped into 400 samples of 60×60 pixels.
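The texture features this abstract lists come from a grey-level co-occurrence matrix (GLCM). A minimal NumPy version for a single pixel offset, computing three of the six features, conveys the idea — an illustrative sketch with a toy image, not the Matlab implementation the paper used:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for offset (dy, dx)."""
    p = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[image[y, x], image[y + dy, x + dx]] += 1
    return p / p.sum()

def glcm_features(p):
    """Three of the paper's six texture features from a normalised GLCM."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()                    # angular second moment
    contrast = ((i - j) ** 2 * p).sum()     # local grey-level variation
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()     # randomness of the texture
    return asm, contrast, entropy

# Toy 4-level greyscale patch.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
asm, contrast, entropy = glcm_features(glcm(img, levels=4))
```

Each 60×60 sample would yield one such feature vector, which the Naïve Bayes classifier then assigns to an age class.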
21

Florence, Laporterie, Flouzat Guy, and Amram Olivier. "THE MORPHOLOGICAL PYRAMID AND ITS APPLICATIONS TO REMOTE SENSING: MULTIRESOLUTION DATA ANALYSIS AND FEATURES EXTRACTION." Image Analysis & Stereology 21, no. 1 (May 3, 2011): 49. http://dx.doi.org/10.5566/ias.v21.p49-53.

Abstract:
In remote sensing, sensors are ever more numerous and their spatial resolution ever higher, so a quick and accurate characterisation of the increasing amount of data has become an important issue. This paper deals with an approach combining a pyramidal algorithm with mathematical morphology to study the physiographic characteristics of terrestrial ecosystems. Our pyramidal strategy first applies morphological filters and then extracts well-known landscape features at each level of resolution. The approach is applied to a digitised aerial photograph representing a heterogeneous landscape of orchards and forests along the Garonne river (France). This example, simulating very high spatial resolution imagery, highlights the influence of the pyramid's parameters according to the spatial properties of the studied patterns. It is shown that the morphological pyramid approach is a promising attempt at multi-level feature extraction through the modelling of relevant geometrical parameters.
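The pyramid described here alternates morphological filtering with resolution reduction, so that each level keeps only structures large enough to survive at that scale. A toy binary sketch in NumPy — a 3×3 opening followed by 2× subsampling per level, which is an assumed simplification of the actual operators in the paper:

```python
import numpy as np

def erode(a):
    """Binary erosion with a 3x3 square structuring element (zero-padded)."""
    p = np.pad(a, 1)
    out = np.ones_like(a)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def dilate(a):
    """Binary dilation with the same 3x3 element."""
    p = np.pad(a, 1)
    out = np.zeros_like(a)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def morphological_pyramid(a, depth):
    """Successively open (erode then dilate) and subsample the image.

    Opening removes features smaller than the structuring element, so
    each level retains only structures that exist at that resolution.
    """
    levels = [a]
    for _ in range(depth):
        opened = dilate(erode(levels[-1]))
        levels.append(opened[::2, ::2])
    return levels

img = np.zeros((16, 16), dtype=int)
img[2:10, 2:10] = 1          # a large block: survives the opening
img[12, 12] = 1              # an isolated pixel: removed at level 1
levels = morphological_pyramid(img, depth=2)
```

Features extracted per level (here, the surviving block) then play the role of the landscape structures the paper characterises at each resolution.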
22

Singh, S. P., K. Jain, and V. R. Mandla. "A new approach towards image based virtual 3D city modeling by using close range photogrammetry." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 329–37. http://dx.doi.org/10.5194/isprsannals-ii-5-329-2014.

Abstract:
3D city model is a digital representation of the Earth's surface and it’s related objects such as building, tree, vegetation, and some manmade feature belonging to urban area. The demand of 3D city modeling is increasing day to day for various engineering and non-engineering applications. Generally three main image based approaches are using for virtual 3D city models generation. In first approach, researchers used Sketch based modeling, second method is Procedural grammar based modeling and third approach is Close range photogrammetry based modeling. Literature study shows that till date, there is no complete solution available to create complete 3D city model by using images. These image based methods also have limitations <br><br> This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First, data acquisition process, second is 3D data processing, and third is data combination process. In data acquisition process, a multi-camera setup developed and used for video recording of an area. Image frames created from video data. Minimum required and suitable video image frame selected for 3D processing. In second section, based on close range photogrammetric principles and computer vision techniques, 3D model of area created. In third section, this 3D model exported to adding and merging of other pieces of large area. Scaling and alignment of 3D model was done. After applying the texturing and rendering on this model, a final photo-realistic textured 3D model created. This 3D model transferred into walk-through model or in movie form. Most of the processing steps are automatic. So this method is cost effective and less laborious. Accuracy of this model is good. For this research work, study area is the campus of department of civil engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for city. 
Aerial photography is restricted in many countries, and high-resolution satellite images are costly. The proposed method is based only on simple video recording of the area, making it well suited to 3D city modeling.

A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. This study thus provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close-range photogrammetry.
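The abstract above describes selecting the minimum required video frames for 3D processing. A minimal sketch of one way such a frame-selection step could work, keeping only frames that differ enough from the last kept frame; the frame representation and the threshold value are illustrative assumptions, not taken from the paper:

```python
# Sketch of a key-frame selection step: from a dense video stream, keep
# only frames that differ enough from the last kept frame, so that the
# photogrammetric block uses the minimum required number of images.
# Frames are modeled as flat lists of grayscale pixel values.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_key_frames(frames, threshold=10.0):
    """Keep the first frame, then every frame whose difference from the
    last kept frame exceeds the threshold (a proxy for camera motion)."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[kept[-1]]) > threshold:
            kept.append(i)
    return kept

# Example: two nearly identical frames followed by a changed view.
f0 = [100] * 16
f1 = [101] * 16          # negligible motion -> skipped
f2 = [150] * 16          # large change -> kept
print(select_key_frames([f0, f1, f2]))  # -> [0, 2]
```

In practice the difference metric and threshold would be tuned so that consecutive key frames retain enough overlap for photogrammetric matching.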
APA, Harvard, Vancouver, ISO, and other styles
23

Mauelshagen, L. "LOW ALTITUDE AERIAL PHOTOGRAPHY." Photogrammetric Record 12, no. 68 (August 26, 2006): 239–41. http://dx.doi.org/10.1111/j.1477-9730.1986.tb00561.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rieke-Zapp, Dirk. "Small-Format Aerial Photography." Photogrammetric Record 26, no. 134 (June 2011): 277. http://dx.doi.org/10.1111/j.1477-9730.2011.00637_2.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sims, W. G., and M. L. Benson. "Mapping from Colour Aerial Photography." Photogrammetric Record 6, no. 33 (August 26, 2006): 321–24. http://dx.doi.org/10.1111/j.1477-9730.1969.tb00945.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Woodrow, H. C. "Mapping from Colour Aerial Photography." Photogrammetric Record 6, no. 34 (August 26, 2006): 408. http://dx.doi.org/10.1111/j.1477-9730.1969.tb00959.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wallington, E. D. "Aerial photography and image interpretation." Photogrammetric Record 19, no. 108 (December 2004): 420–22. http://dx.doi.org/10.1111/j.0031-868x.2004.295_6.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Robertson, V. C. "AERIAL PHOTOGRAPHY AND PROPER LAND UTILISATION." Photogrammetric Record 1, no. 6 (August 26, 2006): 5–12. http://dx.doi.org/10.1111/j.1477-9730.1955.tb01034.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Welch, R., and J. Halliday. "IMAGE QUALITY CONTROLS FOR AERIAL PHOTOGRAPHY†." Photogrammetric Record 8, no. 45 (August 26, 2006): 317–25. http://dx.doi.org/10.1111/j.1477-9730.1975.tb00059.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Dorozhynskyy, O. L., I. Z. Kolb, L. V. Babiy, and L. V. Dychko. "GEODESY, CARTOGRAPHY AND AERIAL PHOTOGRAPHY." GEODESY, CARTOGRAPHY AND AERIAL PHOTOGRAPHY 92,2020, no. 92 (December 24, 2020): 15–23. http://dx.doi.org/10.23939/istcgcap2020.92.015.

Full text
Abstract:
Aim. Determining the elements of exterior spatial orientation of surveying systems at the moment of image acquisition is a fundamental task in photogrammetry. In principle, this problem is solved in two ways. The first is direct positioning and measurement of the direction of the camera's optical axis in geodetic space with the help of GNSS/INS equipment. The second is the analytical solution of the problem using a set of reference information (often a set of ground control points whose geodetic positions are known with sufficient accuracy and which are reliably recognized on the aerial images of the photogrammetric block). The authors consider the task of providing reference and control information using the second approach, which has a number of advantages in terms of the reliability and accuracy of determining the unknown exterior orientation parameters of the images. It is proposed to obtain additional images of ground control points by auxiliary aerial photography with an unmanned aerial vehicle (UAV) at a larger scale than the images of the photogrammetric block. The aim of the presented work is the implementation of this method of creating reference points and the experimental confirmation of its effectiveness for photogrammetric processing. Methods and results. To fully realize the potential of the analytical approach to determining the exterior orientation elements of images, it is necessary to have a certain number of ground control points (GCPs) and to follow a defined scheme for their location across the photogrammetric block. As the main source of input data, the authors use UAV aerial images of the terrain, which are obtained separately from the main aerial survey block, have a better geometric resolution, and clearly depict the ground control points.
Applying such auxiliary images enables automated transfer of the position of a ground control point onto the images of the main photogrammetric block. In our interpretation, these images of ground control points and their surroundings on the ground are called "control reference images". The core of the work is a method for obtaining the auxiliary control reference images and transferring the positions of the GCPs depicted on them onto aerial or space images of the terrain by means of computer stereo matching. To achieve this goal, we developed a processing method that creates a control reference image from an aerial image or from a series of auxiliary multi-scale aerial images obtained by a drone from different heights above the reference point. The operator identifies and measures the GCP once, on the auxiliary aerial image of highest resolution. The control reference image is then automatically stereo-matched through the whole series of auxiliary images in succession, with decreasing resolution, and ultimately directly with the aerial images of the photogrammetric block. At this stage there is no recognition or cursor targeting by a human operator, and therefore no discrepancies, errors, or mistakes related to it. In addition, if fairly large control reference images are used, the proposed method can be applied to low-texture terrain, and thus in many cases dispense with the physical marking of points measured by the GNSS method. This is a way to simplify and reduce the cost of photogrammetric technology. The developed method was verified experimentally by providing control reference information for a block of archival aerial images of low-texture terrain.
The results of the experimental approbation of the proposed method give grounds to assert that it makes geodetic referencing of photogrammetric projects more efficient by eliminating the physical marking of the area before the aerial survey. The proposed method can also be used to obtain information for checking the quality of a photogrammetric survey by providing check points. The authors argue that the use of additional equipment, a semi-professional-class UAV, to obtain control reference images is economically feasible. Scientific novelty and practical relevance. The results of testing the "control reference image" method by obtaining stereo pairs of aerial images with a vertically oriented base are presented for the first time. The properties of such stereo pairs were studied for obtaining images of reference points. The effectiveness of including reference images in the main block of the digital aerial triangulation network created from UAV images is demonstrated.
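The automatic matching step described in this abstract can be illustrated with a small sketch: the control reference image (a patch around the measured GCP) is located in a larger aerial image by normalized cross-correlation (NCC). Images here are 2D lists of grayscale values; this shows the matching principle only, not the authors' implementation:

```python
# Locate a GCP patch ("control reference image") inside a larger aerial
# image by exhaustive normalized cross-correlation.
import math

def ncc(patch, image, r, c):
    """NCC score of `patch` against the same-sized window of `image`
    whose top-left corner is at (r, c)."""
    h, w = len(patch), len(patch[0])
    win = [image[r + i][c + j] for i in range(h) for j in range(w)]
    tpl = [v for row in patch for v in row]
    mw, mt = sum(win) / len(win), sum(tpl) / len(tpl)
    num = sum((a - mw) * (b - mt) for a, b in zip(win, tpl))
    den = math.sqrt(sum((a - mw) ** 2 for a in win) *
                    sum((b - mt) ** 2 for b in tpl))
    return num / den if den else 0.0

def match_reference(patch, image):
    """Return the (row, col) offset in `image` with the highest NCC score."""
    h, w = len(patch), len(patch[0])
    best, best_rc = -2.0, (0, 0)
    for r in range(len(image) - h + 1):
        for c in range(len(image[0]) - w + 1):
            score = ncc(patch, image, r, c)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# The GCP patch (a bright cross) is found inside a larger dark image.
patch = [[0, 9, 0],
         [9, 9, 9],
         [0, 9, 0]]
image = [[1, 1, 1, 1, 1],
         [1, 0, 9, 0, 1],
         [1, 9, 9, 9, 1],
         [1, 0, 9, 0, 1],
         [1, 1, 1, 1, 1]]
print(match_reference(patch, image))  # -> (1, 1)
```

The multi-scale cascade in the paper would repeat this kind of matching through successively coarser auxiliary images until the GCP position lands on the main photogrammetric block.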
APA, Harvard, Vancouver, ISO, and other styles
31

Wickstead, Helen, and Martyn Barber. "A Spectacular History of Survey by Flying Machine!" Cambridge Archaeological Journal 22, no. 1 (February 2012): 71–88. http://dx.doi.org/10.1017/s0959774312000054.

Full text
Abstract:
The origins of archaeological methods are often surprising, revealing unexpected connections between science, art and entertainment. This article explores aerial survey, a visual method commonly represented as distancing or objective. We show how aerial survey's visualizing practices embody subjective notions of vision emerging throughout the nineteenth century. Aerial survey smashes linear perspective, fragments time-space, and places radical doubt at the root of claims to truth. Its techniques involve hallucination, and its affinities are with stop-motion photography and cinema. Exposing the juvenile dementia of aerial survey's infancy releases practitioners and critics from the impulse to defend or demolish its ‘enlightenment’ credentials.
APA, Harvard, Vancouver, ISO, and other styles
32

Leberl, Franz, Horst Bischof, Thomas Pock, Arnold Irschara, and Stefan Kluckner. "Aerial Computer Vision for a 3D Virtual Habitat." Computer 43, no. 6 (June 2010): 24–31. http://dx.doi.org/10.1109/mc.2010.156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Dyce, Matt. "Canada between the photograph and the map: Aerial photography, geographical vision and the state." Journal of Historical Geography 39 (January 2013): 69–84. http://dx.doi.org/10.1016/j.jhg.2012.07.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wester-Ebbinghaus, W. "AERIAL PHOTOGRAPHY BY RADIO CONTROLLED MODEL HELICOPTER." Photogrammetric Record 10, no. 55 (August 26, 2006): 85–92. http://dx.doi.org/10.1111/j.1477-9730.1980.tb00006.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hiby, A. R., D. Thompson, and A. J. Ward. "CENSUS OF GREY SEALS BY AERIAL PHOTOGRAPHY." Photogrammetric Record 12, no. 71 (August 26, 2006): 589–94. http://dx.doi.org/10.1111/j.1477-9730.1988.tb00607.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Anikeeva, I. A. "Method of numerical estimating aerial images indicators quality for mapping purposes." Geodesy and Cartography 968, no. 2 (March 20, 2021): 29–37. http://dx.doi.org/10.22389/0016-7126-2021-968-2-29-37.

Full text
Abstract:
Assessing the quality of aerial imagery obtained for mapping, in terms of its visual properties, is very ambiguous due to the lack of objective criteria and evaluation methods. A system of indicators of aerial image quality and methods for their numerical assessment is presented. The quality of a fine aerial image is characterized by a set of structural and gradation properties. The structural properties of an image are determined by its actual spatial resolution and photographic sharpness. The gradation properties of an image are characterized by correct color rendering, the level of random noise, and information-completeness indicators: haze, radiometric resolution, and the percentage of information loss in highlights and shadows. Methods for evaluating these indicators are formulated, and their recommended and acceptable numerical values are determined analytically. To refine and verify these analytically obtained recommended and acceptable values, practical application and further experimental studies are necessary, using materials obtained with various airborne imaging sensors for mapping.
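One concrete way to turn a structural quality property into a number, in the spirit of the indicators above, is to estimate photographic sharpness as the variance of a discrete Laplacian over the image. The 4-neighbour kernel and the interpretation (higher score = sharper image) are illustrative assumptions, not the metric defined in the paper:

```python
# Numerical sharpness indicator: variance of the 4-neighbour Laplacian.
# Edges and fine detail produce large Laplacian responses, so a blurred
# image scores lower than a sharp one.

def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over interior pixels."""
    vals = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            lap = (img[r - 1][c] + img[r + 1][c] +
                   img[r][c - 1] + img[r][c + 1] - 4 * img[r][c])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A sharp edge scores much higher than a flat (featureless) region.
sharp = [[0, 0, 9, 9]] * 4
flat = [[5, 5, 5, 5]] * 4
print(laplacian_variance(sharp) > laplacian_variance(flat))  # -> True
```

A practical quality-control pipeline would compute several such indicators (noise level, haze, clipping percentage) and compare each against the recommended and acceptable thresholds.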
APA, Harvard, Vancouver, ISO, and other styles
37

Mendes, Odilon Linhares Carvalho, and Giovanna Miceli Ronzani Borille. "Computer vision systems in unmanned aerial vehicle: a review." Journal of Mechatronics Engineering 2, no. 2 (July 30, 2019): 11. http://dx.doi.org/10.21439/jme.v2i2.26.

Full text
Abstract:
Inspections of areas that are difficult to access or hostile to humans, pattern recognition, surveillance, and monitoring are some of the many applications in which Unmanned Aerial Vehicles (UAVs) can be a solution, opening up new perspectives for the use of this technology. Navigation and positioning of UAVs can be performed autonomously through computer vision, the technology of building artificial systems capable of reading information from images or any multidimensional data and making decisions. This work presents a review of the use of computer vision systems in UAVs, with a focus on their many applications. The main objective is to analyze the latest technologies used to develop computer vision for UAVs, through tools for data search, information storage, and, mainly, data processing and analysis. The review encompasses recent works, from 2011 onwards, drawn from the Science Direct portal. For each work, the objectives, methodology, and results were analyzed. Based on this analysis, a comparison was made between the techniques and their challenges, and future outlook scenarios for UAVs using computer vision are discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Bawden, M. P. "APPLICATIONS OF AERIAL PHOTOGRAPHY IN LAND SYSTEM MAPPING." Photogrammetric Record 5, no. 30 (August 26, 2006): 461–64. http://dx.doi.org/10.1111/j.1477-9730.1967.tb00897.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Graham, R. W., and G. H. Thomson. "CONSIDERATIONS OF SPECIFICATIONS FOR SMALL FORMAT AERIAL PHOTOGRAPHY." Photogrammetric Record 13, no. 74 (August 26, 2006): 225. http://dx.doi.org/10.1111/j.1477-9730.1989.tb00672.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Marks, A. R. "AERIAL PHOTOGRAPHY FROM A TETHERED HELIUM FILLED BALLOON." Photogrammetric Record 13, no. 74 (August 26, 2006): 257–61. http://dx.doi.org/10.1111/j.1477-9730.1989.tb00677.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Goba, N. L., and E. M. Senese. "THE STATUS OF SUPPLEMENTARY AERIAL PHOTOGRAPHY IN ONTARIO." Photogrammetric Record 14, no. 80 (August 26, 2006): 283–91. http://dx.doi.org/10.1111/j.1477-9730.1992.tb00253.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Warner, W. S., and L. E. Blankenberg. "Bundle Adjustment For 35 mm Oblique Aerial Photography." Photogrammetric Record 15, no. 86 (October 1995): 217–24. http://dx.doi.org/10.1111/0031-868x.00027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Lishan, Yuji Yang, Hui Cheng, and Xuechen Chen. "Autonomous Vision-Based Aerial Grasping for Rotorcraft Unmanned Aerial Vehicles." Sensors 19, no. 15 (August 3, 2019): 3410. http://dx.doi.org/10.3390/s19153410.

Full text
Abstract:
Autonomous vision-based aerial grasping is an essential and challenging task for aerial manipulation missions. In this paper, we propose a vision-based aerial grasping system for a Rotorcraft Unmanned Aerial Vehicle (UAV) to grasp a target object. The UAV system is equipped with a monocular camera, a 3-DOF robotic arm with a gripper and a Jetson TK1 computer. Efficient and reliable visual detectors and control laws are crucial for autonomous aerial grasping using limited onboard sensing and computational capabilities. To detect and track the target object in real time, an efficient proposal algorithm is presented to reliably estimate the region of interest (ROI), then a correlation filter-based classifier is developed to track the detected object. Moreover, a support vector regression (SVR)-based grasping position detector is proposed to improve the grasp success rate with high computational efficiency. Using the estimated grasping position and the UAV's states, novel control laws of the UAV and the robotic arm are proposed to perform aerial grasping. Extensive simulations and outdoor flight experiments have been implemented. The experimental results illustrate that the proposed vision-based aerial grasping system can autonomously and reliably grasp the target object while working entirely onboard.
APA, Harvard, Vancouver, ISO, and other styles
44

Çelik, Koray, and Arun K. Somani. "Monocular Vision SLAM for Indoor Aerial Vehicles." Journal of Electrical and Computer Engineering 2013 (2013): 1–15. http://dx.doi.org/10.1155/2013/374165.

Full text
Abstract:
This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like man-made environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is limited only by the capabilities of the camera and environmental entropy.
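The monocular ranging idea this abstract relies on can be sketched with the pinhole camera model: given a known real-world dimension (e.g. the corridor width implied by the architectural layout), the apparent pixel width yields range. The focal length and corridor width below are hypothetical values, not the paper's calibration:

```python
# Pinhole-model range estimate from a single camera: a feature of known
# real-world size appears smaller in pixels the farther away it is.

def range_from_width(real_width_m, pixel_width, focal_px):
    """Pinhole range estimate: Z = f * W / w, where W is the real width
    in metres, w its width in pixels, and f the focal length in pixels."""
    return focal_px * real_width_m / pixel_width

# A 2 m wide corridor spanning 400 px, with a 600 px focal length.
print(range_from_width(2.0, 400, 600))  # -> 3.0
```

A full SLAM pipeline would fuse many such range cues with the vehicle's motion model, but the per-feature geometry reduces to this relation.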
APA, Harvard, Vancouver, ISO, and other styles
45

Faraji, Mohammad Reza, Xiaojun Qi, and Austin Jensen. "Computer vision–based orthorectification and georeferencing of aerial image sets." Journal of Applied Remote Sensing 10, no. 3 (September 22, 2016): 036027. http://dx.doi.org/10.1117/1.jrs.10.036027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Thomson, G. H. "EVALUATING IMAGE QUALITY OF SMALL FORMAT AERIAL PHOTOGRAPHY SYSTEMS." Photogrammetric Record 12, no. 71 (August 26, 2006): 595–603. http://dx.doi.org/10.1111/j.1477-9730.1988.tb00608.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Davison, M. "HEIGHTING ACCURACY TEST FROM 1:2000 SCALE AERIAL PHOTOGRAPHY." Photogrammetric Record 14, no. 84 (October 1994): 922–25. http://dx.doi.org/10.1111/j.1477-9730.1994.tb00293.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Timofte, Radu, Luc Van Gool, Ming-Hsuan Yang, Shai Avidan, Yasuyuki Matsushita, and Qingxiong Yang. "Guest Editorial: Vision and Computational Photography and Graphics." Computer Vision and Image Understanding 168 (March 2018): 1–2. http://dx.doi.org/10.1016/j.cviu.2018.02.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Belmonte, Lidia María, Rafael Morales, and Antonio Fernández-Caballero. "Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study." Applied Sciences 9, no. 15 (August 5, 2019): 3196. http://dx.doi.org/10.3390/app9153196.

Full text
Abstract:
Personal assistant robots provide novel technological solutions in order to monitor people’s activities, helping them in their daily lives. In this sense, unmanned aerial vehicles (UAVs) can also bring forward a present and future model of assistant robots. To develop aerial assistants, it is necessary to address the issue of autonomous navigation based on visual cues. Indeed, navigating autonomously is still a challenge in which computer vision technologies tend to play an outstanding role. Thus, the design of vision systems and algorithms for autonomous UAV navigation and flight control has become a prominent research field in the last few years. In this paper, a systematic mapping study is carried out in order to obtain a general view of this subject. The study provides an extensive analysis of papers that address computer vision as regards the following autonomous UAV vision-based tasks: (1) navigation, (2) control, (3) tracking or guidance, and (4) sense-and-avoid. The works considered in the mapping study—a total of 144 papers from an initial set of 2081—have been classified under the four categories above. Moreover, type of UAV, features of the vision systems employed and validation procedures are also analyzed. The results obtained make it possible to draw conclusions about the research focuses, which UAV platforms are mostly used in each category, which vision systems are most frequently employed, and which types of tests are usually performed to validate the proposed solutions. The results of this systematic mapping study demonstrate the scientific community’s growing interest in the development of vision-based solutions for autonomous UAVs. Moreover, they will make it possible to study the feasibility and characteristics of future UAVs taking the role of personal assistants.
APA, Harvard, Vancouver, ISO, and other styles
50

Wijaszka, Mirosław. "Vision-Guided Control Algorithms for Micro Aerial Vehicles." Research Works of Air Force Institute of Technology 33, no. 1 (January 1, 2013): 271–88. http://dx.doi.org/10.2478/afit-2013-0016.

Full text
Abstract:
This study presents an image analysis method used in a vision-guided control system for Micro Air Vehicles (MAVs). The paper describes a hypothetical model of a MAV located in a GPS-denied, unknown environment, somewhere indoors. The model moves autonomously, following 'the track' marked by corners and other feature points recorded with a monocular camera pointed at the far end of a corridor and tilted slightly down at an angle β (20°-30°). Flight stability and control are provided by an on-board autopilot that maintains zero pitch and roll angles and constant altitude. The image analysis is based on the real-time computer vision library OpenCV (Open Source Computer Vision library - http://opencv.willowgarage.com).
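The steering idea described in this abstract can be sketched simply: the tracked corner points are averaged, and the horizontal offset of that centroid from the image centre yields a proportional yaw correction. The gain and field-of-view values are hypothetical; the paper's actual control law is not reproduced here:

```python
# Proportional yaw command from tracked corner points: steer the MAV
# toward the centroid of the features marking 'the track'.

def yaw_correction(corners, image_width, fov_deg=60.0, gain=1.0):
    """Yaw command in degrees; positive means turn toward the right of
    the image. `corners` is a list of (x, y) pixel coordinates."""
    if not corners:
        return 0.0  # no features detected: hold the current heading
    cx = sum(x for x, _ in corners) / len(corners)
    # Offset from the image centre, mapped to an angle via the field of view.
    offset = (cx - image_width / 2) / image_width
    return gain * offset * fov_deg

# Corners clustered to the right of a 640 px wide frame -> turn right.
cmd = yaw_correction([(400, 120), (420, 130), (440, 110)], 640)
print(cmd)  # -> 9.375
```

In a real system the corners would come from a detector such as OpenCV's Shi-Tomasi corner finder, and the command would be smoothed before being passed to the autopilot.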
APA, Harvard, Vancouver, ISO, and other styles