Academic literature on the topic 'Computer vision Aerial photography'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer vision Aerial photography.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer vision Aerial photography"

1

Lopatin, Yaroslav, and Wilhelm Heger. "GEODESY, CARTOGRAPHY, AND AERIAL PHOTOGRAPHY." Geodesy, Cartography, and Aerial Photography, no. 93 (June 23, 2021): 5–12. http://dx.doi.org/10.23939/istcgcap2021.93.005.

Full text
Abstract:
The aim of the work is to develop an automated measuring system for a mechanical gyrocompass, using specially developed hardware and software, in order to simplify operation of the device and minimize observer errors. The developed complex automates only the time method, since the turning-point method requires constant contact with the motion screw of the total station. The project is based on an integrated system whose hardware comprises a single-board computer, a camera, and a lens. The main software component is a motion-recognition algorithm based on image processing, created in the Python programming language with the open-source computer vision library OpenCV. The hardware captures a video image of the gyroscope's reference scale, and the software identifies the moving light indicator and its position relative to the scale in this image. The result of the study is a functioning automatic measurement system that determines the azimuth of a direction with the same accuracy as manual measurements. The system is controlled remotely from a computer over a Wi-Fi network. To test the system, a series of automatic and manual measurements were performed simultaneously at the same point for the same direction. The results show that the accuracy of the system is within the limits specified by the device manufacturer for manual measurements. Applying computer vision technology, namely tracking a moving object in an image, to gyroscopic measurements can give significant impetus to the development of automation for a wide range of measuring instruments, which in turn can improve the accuracy of measurement results. The developed system can be used with the Gyromax AK-2M gyrocompass by GeoMessTechnik for automated measurements and for training new operators. The developed model helps avoid gross observer errors and simplifies the measurement process, which no longer requires the operator's constant presence at the device; in some dangerous conditions this is a significant advantage.
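The tracking step described above, finding the moving light indicator in video of the reference scale, reduces to frame differencing plus a centroid. Below is a minimal NumPy sketch of that idea; the authors use OpenCV's equivalent absdiff/threshold/moments chain, and the function name and threshold value here are illustrative assumptions, not their code:

```python
import numpy as np

def locate_indicator(prev_frame, curr_frame, threshold=25):
    """Return the (x, y) centroid of pixels that changed between two
    grayscale frames -- i.e. the moving light indicator -- or None if
    nothing moved. Mirrors OpenCV's absdiff/threshold/moments chain."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Reading the indicator's position relative to the scale then becomes an offset between this centroid and a calibrated scale origin in image coordinates.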
APA, Harvard, Vancouver, ISO, and other styles
2

Verhoeven, Geert. "Taking computer vision aloft - archaeological three-dimensional reconstructions from aerial photographs with photoscan." Archaeological Prospection 18, no. 1 (January 2011): 67–73. http://dx.doi.org/10.1002/arp.399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jacob-Loyola, Nicolás, Felipe Muñoz-La Rivera, Rodrigo F. Herrera, and Edison Atencio. "Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction." Sensors 21, no. 12 (June 20, 2021): 4227. http://dx.doi.org/10.3390/s21124227.

Full text
Abstract:
The physical progress of a construction project is monitored by an inspector responsible for verifying and backing up progress information, usually through site photography. Progress monitoring has improved, thanks to advances in image acquisition, computer vision, and the development of unmanned aerial vehicles (UAVs). However, no comprehensive and simple methodology exists to guide practitioners and facilitate the use of these methods. This research provides recommendations for the periodic recording of the physical progress of a construction site through the manual operation of UAVs and the use of point clouds obtained under photogrammetric techniques. The programmed progress is then compared with the actual progress made in a 4D BIM environment. This methodology was applied in the construction of a reinforced concrete residential building. The results showed the methodology is effective for UAV operation in the work site and the use of the photogrammetric visual records for the monitoring of the physical progress and the communication of the work performed to the project stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
4

Park, J. W., H. H. Jeong, J. S. Kim, and C. U. Choi. "Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 941–44. http://dx.doi.org/10.5194/isprsarchives-xli-b7-941-2016.

Full text
Abstract:
Aerial photography with unmanned aerial vehicle (UAV) systems has typically relied on remote control through a ground control system over a radio-frequency (RF) modem using a bandwidth of about 430 MHz. However, this RF-modem approach is limited in long-distance communication. Using the smart camera's LTE (Long-Term Evolution), Bluetooth, and Wi-Fi, a UAV communication module system was developed to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in the area that needs imaging, plus software for loading and managing a smart camera. The system comprises automatic shooting using the smart camera's sensors and shooting-catalog management, which manages the captured images and their information. The UAV imagery processing module used OpenDroneMap. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Source Computer Vision), RTKLIB, and OpenDroneMap.
APA, Harvard, Vancouver, ISO, and other styles
5

Park, J. W., H. H. Jeong, J. S. Kim, and C. U. Choi. "Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 941–44. http://dx.doi.org/10.5194/isprs-archives-xli-b7-941-2016.

Full text
Abstract:
Aerial photography with unmanned aerial vehicle (UAV) systems has typically relied on remote control through a ground control system over a radio-frequency (RF) modem using a bandwidth of about 430 MHz. However, this RF-modem approach is limited in long-distance communication. Using the smart camera's LTE (Long-Term Evolution), Bluetooth, and Wi-Fi, a UAV communication module system was developed to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in the area that needs imaging, plus software for loading and managing a smart camera. The system comprises automatic shooting using the smart camera's sensors and shooting-catalog management, which manages the captured images and their information. The UAV imagery processing module used OpenDroneMap. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Source Computer Vision), RTKLIB, and OpenDroneMap.
APA, Harvard, Vancouver, ISO, and other styles
6

Shu, Joseph Shou-Pyng, and Herbert Freeman. "Cloud shadow removal from aerial photographs." Pattern Recognition 23, no. 6 (January 1990): 647–56. http://dx.doi.org/10.1016/0031-3203(90)90040-r.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhou, Yu, Chunxue Wu, Qunhui Wu, Zelda Makati Eli, Naixue Xiong, and Sheng Zhang. "Design and Analysis of Refined Inspection of Field Conditions of Oilfield Pumping Wells Based on Rotorcraft UAV Technology." Electronics 8, no. 12 (December 9, 2019): 1504. http://dx.doi.org/10.3390/electronics8121504.

Full text
Abstract:
The traditional oil-well monitoring method relies on manual acquisition and various high-precision sensors. Judging the working condition of a well from the indicator diagram is not only difficult but also consumes substantial manpower and financial resources. This paper proposes using computer vision to detect working conditions in oil extraction. Exploiting the advantages of an unmanned aerial vehicle (UAV), aerial images are used to detect on-site working conditions in real time by tracking the working status of the horse head and other related parts of the pumping unit. Considering the real-time requirements of working-condition detection, this paper proposes a framework that combines You Only Look Once version 3 (YOLOv3) with the SORT (Simple Online and Realtime Tracking) algorithm to perform multi-target tracking in the tracking-by-detection paradigm. The quality of target detection in the framework is the key factor affecting tracking performance. The experimental results show that a good detector allows tracking to run in real time and supports real-time detection of the working condition, giving the approach strong practical applicability.
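The tracking-by-detection pipeline in this abstract hinges on associating per-frame detections with existing tracks. A simplified greedy IoU matcher illustrates that association step; SORT itself adds Kalman-filter motion prediction and Hungarian assignment, and the names and 0.3 threshold below are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match track boxes to detection boxes by best IoU.
    Returns {track_id: detection_index} for matches above iou_min."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_min
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Unmatched detections would spawn new tracks, and tracks unmatched for several frames would be dropped.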
APA, Harvard, Vancouver, ISO, and other styles
8

Marchewka, Adam, Patryk Ziółkowski, and Victor Aguilar-Vidal. "Framework for Structural Health Monitoring of Steel Bridges by Computer Vision." Sensors 20, no. 3 (January 27, 2020): 700. http://dx.doi.org/10.3390/s20030700.

Full text
Abstract:
The monitoring of a structural condition of steel bridges is an important issue. Good condition of infrastructure facilities ensures the safety and economic well-being of society. At the same time, due to the continuous development, rising wealth of the society and socio-economic integration of countries, the number of infrastructural objects is growing. Therefore, there is a need to introduce an easy-to-use and relatively low-cost method of bridge diagnostics. We can achieve these benefits by the use of Unmanned Aerial Vehicle-Based Remote Sensing and Digital Image Processing. In our study, we present a state-of-the-art framework for Structural Health Monitoring of steel bridges that involves literature review on steel bridges health monitoring, drone route planning, image acquisition, identification of visual markers that may indicate a poor condition of the structure and determining the scope of applicability. The presented framework of image processing procedure is suitable for diagnostics of steel truss riveted bridges. In our considerations, we used photographic documentation of the Fitzpatrick Bridge located in Tallassee, Alabama, USA.
APA, Harvard, Vancouver, ISO, and other styles
9

Verhoeven, G., M. Doneus, Ch Briese, and F. Vermeulen. "Mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs." Journal of Archaeological Science 39, no. 7 (July 2012): 2060–70. http://dx.doi.org/10.1016/j.jas.2012.02.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fuentes, Sigfredo, Gabriela Chacon, Damir D. Torrico, Andrea Zarate, and Claudia Gonzalez Viejo. "Spatial Variability of Aroma Profiles of Cocoa Trees Obtained through Computer Vision and Machine Learning Modelling: A Cover Photography and High Spatial Remote Sensing Application." Sensors 19, no. 14 (July 11, 2019): 3054. http://dx.doi.org/10.3390/s19143054.

Full text
Abstract:
Cocoa is an important commodity crop: not only is it used to produce chocolate, one of the most complex products from the sensory perspective, but it also commonly grows in developing countries close to the tropics. This paper presents novel techniques, applied using cover photography and a novel computer application (VitiCanopy), to assess the canopy architecture of cocoa trees in a commercial plantation in Queensland, Australia. From the monitored cocoa trees, pod samples were collected, fermented, dried, and ground to obtain the aroma profile per tree using gas chromatography. The canopy architecture data were used as inputs to an artificial neural network (ANN) algorithm, with the aroma profile, considering six main aromas, as targets. The ANN model achieved high accuracy (correlation coefficient R = 0.82; mean squared error MSE = 0.09) with no overfitting. The model was then applied to an aerial image of the whole cocoa field studied to produce canopy-vigor and aroma-profile maps down to the tree-by-tree scale. The tool developed could significantly aid canopy management practices in cocoa trees, which have a direct effect on cocoa quality.
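The ANN regression described here, canopy-architecture inputs to aroma targets, can be sketched as a one-hidden-layer network in NumPy. This is a generic stand-in trained on synthetic data, not the paper's model; the layer size, learning rate, and function names are assumptions:

```python
import numpy as np

def train_mlp(X, Y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """Train a one-hidden-layer regression network (tanh units) by
    full-batch gradient descent on mean squared error.
    Returns a predictor function."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        err = (H @ W2 + b2) - Y           # prediction error
        dH = (err @ W2.T) * (1.0 - H**2)  # backprop through tanh
        W2 -= lr * (H.T @ err) / n; b2 -= lr * err.mean(0)
        W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(0)
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2
```

In the paper's setting, rows of `X` would be per-tree canopy metrics and columns of `Y` the six aroma intensities.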
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Computer vision Aerial photography"

1

Sando, Jean M., and Robert B. McGhee. "A texture analysis approach to computer vision for identification of roads in aerial photographs." Thesis, Monterey, California. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bortolot, Zachary Jared. "An Adaptive Computer Vision Technique for Estimating the Biomass and Density of Loblolly Pine Plantations using Digital Orthophotography and LiDAR Imagery." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/27454.

Full text
Abstract:
Forests have been proposed as a means of reducing atmospheric carbon dioxide levels due to their ability to store carbon as biomass. To quantify the amount of atmospheric carbon sequestered by forests, biomass and density estimates are often needed. This study develops, implements, and tests an individual tree-based algorithm for obtaining forest density and biomass using orthophotographs and small-footprint LiDAR imagery. It was designed to work with a range of forests and image types without modification, which is accomplished by using generic properties of trees found in many types of images. Multiple parameters are employed to determine how these generic properties are used. To set these parameters, training data are used in conjunction with an optimization algorithm (a modified Nelder-Mead simplex algorithm or a genetic algorithm). The training data consist of small images in which density and biomass are known. A first test of this technique was performed using 25 circular plots (radius = 15 m) placed in young pine plantations in central Virginia, together with false-color orthophotograph (spatial resolution = 0.5 m) or small-footprint LiDAR (interpolated to 0.5 m) imagery. The highest density prediction accuracies (r2 up to 0.88, RMSE as low as 83 trees / ha) were found for runs where photointerpreted densities were used for training and testing. For tests run using density measurements made on the ground, accuracies were consistently higher for orthophotograph-based results than for LiDAR-based results, and were higher for trees with DBH ≥ 10 cm than for trees with DBH ≥ 7 cm. Biomass estimates obtained by the algorithm using LiDAR imagery had a lower RMSE (as low as 15.6 t / ha) than most comparable studies. The correlations between the actual and predicted values (r2 up to 0.64) were lower than in comparable studies, but were generally highly significant (p ≤ 0.05 or 0.01).
In all runs there was no obvious relationship between accuracy and the amount of training data used, but the algorithm was sensitive to which training and testing data were selected. Methods were evaluated for combining predictions made using different parameter sets obtained after training using identical data. It was found that averaging the predictions produced improved results. After training using density estimates from the human photointerpreter, 89% of the trees located by the algorithm corresponded to trees found by the human photointerpreter. A comparison of the two optimization techniques found them to be comparable in speed and effectiveness.
Ph. D.
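The parameter-setting step in this abstract, tuning detector parameters against training plots with a Nelder-Mead simplex, can be sketched with SciPy. The error function below is a synthetic stand-in with a known optimum, not the dissertation's actual objective; in practice it would measure disagreement between predicted and known plot densities:

```python
import numpy as np
from scipy.optimize import minimize

def training_error(p, known=(0.4, 2.0)):
    """Hypothetical training objective: squared distance of the
    parameter vector p from the (synthetic) best-fitting parameters.
    A real objective would run the tree detector with p on training
    images and compare predicted densities to known densities."""
    return (p[0] - known[0]) ** 2 + (p[1] - known[1]) ** 2

# Derivative-free simplex search, as in the modified Nelder-Mead option.
result = minimize(training_error, x0=[1.0, 1.0], method="Nelder-Mead")
best_params = result.x
```

A genetic algorithm, the study's other option, would simply replace `minimize` with a population-based search over the same objective.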
APA, Harvard, Vancouver, ISO, and other styles
3

Bradley, Justin Mathew. "Particle Filter Based Mosaicking for Forest Fire Tracking." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gombos, Andrew David. "DETECTION OF ROOF BOUNDARIES USING LIDAR DATA AND AERIAL PHOTOGRAPHY." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_theses/75.

Full text
Abstract:
The recent growth in inexpensive laser scanning sensors has created entire fields of research aimed at processing this data. One application is determining the polygonal boundaries of roofs, as seen from an overhead view. The resulting building outlines have many commercial as well as military applications. My work in this area has created a segmentation algorithm where the descriptive features are computationally and theoretically simpler than previous methods. A support vector machine is used to segment data points using these features, and their use is not common for roof detection to date. Despite the simplicity of the feature calculations, the accuracy of our algorithm is similar to previous work. I also describe a basic polygonal extraction method, which is acceptable for basic roofs.
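The segmentation step here, an SVM over simple per-point features, can be illustrated with a linear SVM fit by subgradient descent on the regularized hinge loss. The two toy features and all names below are assumptions for illustration, not the thesis's actual feature set:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Fit a linear SVM by full-batch subgradient descent on the
    L2-regularized hinge loss. Labels y must be +1 / -1."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(X)
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1  # points violating the margin
        gw = lam * w - (y[viol][:, None] * X[viol]).sum(0) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def classify(X, w, b):
    """Label points as roof (+1) or non-roof (-1)."""
    return np.sign(X @ w + b)
```

A production roof detector would use a kernelized SVM library over the thesis's descriptive features; the sketch only shows how a maximum-margin boundary separates the two point classes.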
APA, Harvard, Vancouver, ISO, and other styles
5

Zuniga, Oscar A. "Low level and intermediate level vision in aerial images." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/54478.

Full text
Abstract:
Low-level and intermediate-level computer vision tasks are regarded as transformations from lower- to higher-level representations of the image information. An edge-based representation that makes explicit linear features and their spatial relationships is developed. Examples are presented in the scene domain of aerial images of urban scenes containing man-made structures. The techniques used are based on a common structural and statistical model of the image data, which assumes that the image data are adequately represented locally by a bivariate cubic polynomial plus additive independent Gaussian noise. This model, although simple, is shown to be useful for designing effective solutions to computer vision tasks. Four low-level computer vision modules are developed. First, a gradient operator which sharply reduces the gradient-direction estimate bias that plagues current operators while also reducing sensitivity to noise. Second, a Bayes decision procedure for automatic gradient threshold selection that produces results superior to those obtained by the best subjective threshold. Third, the new gradient operator and automatic gradient threshold selection are used in Haralick's directional zero-crossing edge operator, resulting in improved performance. Finally, a graytone corner detector with a significantly better probability of correct corner assignment than other corner detectors available in the literature. Intermediate-level modules are developed for the construction of a number of intermediate-level units from linear features. Among these is a linear segment extraction method that uses both zero-crossing positional and angular information, together with their distributional characteristics, to accomplish optimal linear segment fitting. Methods for hypothesizing corners and relations of parallelism and collinearity among pairs of linear segments are developed. These relations are used to build higher-level groupings of linear segments that are likely to correspond to cultural objects.
Ph. D.
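The low-level gradient module described above computes per-pixel gradient magnitude and direction, the inputs to edge thresholding. The dissertation's operator is facet-model based; as a generic stand-in, a plain-NumPy Sobel gradient field illustrates the idea:

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

def cross_correlate(img, kernel):
    """Naive 'valid' 2-D cross-correlation for small kernels."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def gradient_field(img):
    """Per-pixel gradient magnitude and direction via Sobel operators."""
    gx = cross_correlate(img, SOBEL_X)
    gy = cross_correlate(img, SOBEL_X.T)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

An automatic threshold on the magnitude image (the role of the Bayes decision procedure in the dissertation) then separates edge pixels from noise.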
APA, Harvard, Vancouver, ISO, and other styles
6

Holt, Ryan Samuel. "Three enabling technologies for vision-based, forest-fire perimeter surveillance using multiple unmanned aerial systems." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1894.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ludington, Ben T. "Particle filter tracking architecture for use onboard unmanned aerial vehicles." Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-11142006-152845/.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007.
Vachtsevanos, George, Committee Chair ; Heck, Bonnie, Committee Member ; Vela, Patricio, Committee Member ; Yezzi, Anthony, Committee Member ; Johnson, Eric, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
8

Vendra, Soujanya. "Addressing corner detection issues for machine vision based UAV aerial refueling." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4551.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 121 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 90-95).
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Caixia. "Using Linear Features for Aerial Image Sequence Mosaiking." Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/WangC2004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Edwards, Barrett Bruce. "An Onboard Vision System for Unmanned Aerial Vehicle Guidance." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2381.

Full text
Abstract:
The viability of small Unmanned Aerial Vehicles (UAVs) as a stable platform for specific application use has been significantly advanced in recent years. Initial focus of lightweight UAV development was to create a craft capable of stable and controllable flight. This is largely a solved problem. Currently, the field has progressed to the point that unmanned aircraft can be carried in a backpack, launched by hand, weigh only a few pounds and be capable of navigating through unrestricted airspace. The most basic use of a UAV is to visually observe the environment and use that information to influence decision making. Previous attempts at using visual information to control a small UAV used an off-board approach where the video stream from an onboard camera was transmitted down to a ground station for processing and decision making. These attempts achieved limited results as the two-way transmission time introduced unacceptable amounts of latency into time-sensitive control algorithms. Onboard image processing offers a low-latency solution that will avoid the negative effects of two-way communication to a ground station. The first part of this thesis will show that onboard visual processing is capable of meeting the real-time control demands of an autonomous vehicle, which will also include the evaluation of potential onboard computing platforms. FPGA-based image processing will be shown to be the ideal technology for lightweight unmanned aircraft. The second part of this thesis will focus on the exact onboard vision system implementation for two proof-of-concept applications. The first application describes the use of machine vision algorithms to locate and track a target landing site for a UAV. GPS guidance was insufficient for this task. A vision system was utilized to localize the target site during approach and provide course correction updates to the UAV. 
The second application describes a feature detection and tracking sub-system that can be used in higher level application algorithms.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Computer vision Aerial photography"

1

Sando, Jean M. A texture analysis approach to computer vision for identification of roads in aerial photographs. Monterey, Calif: Naval Postgraduate School, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gunst, Marlies de. Knowledge-based interpretation of aerial images for updating of road maps. Delft, The Netherlands: Nederlandse Commissie voor Geodesie, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Through the eyes of the condor: An aerial vision of Latin America. Washington, DC: National Geographic, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gevers, Theo. Color in computer vision: Fundamentals and applications. Hoboken, NJ: Wiley, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vision & voice: Refining your vision in Adobe Photoshop Lightroom. Berkeley, CA: New Riders, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wall, Jeff. Jeff Wall: Space and vision. München: Schirmer/Mosel, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jacobs, Corinna. Interactive Panoramas: Techniques for Digital Panoramic Photography. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Computational photography: Methods and applications. Boca Raton, FL: CRC Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Matsuyama, Takashi. SIGMA: A knowledge-based aerial image understanding system. New York: Plenum, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hubbell, Gerald R. Scientific Astrophotography: How Amateurs Can Generate and Use Professional Imaging Data. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Computer vision Aerial photography"

1

Hößler, Tom, and Tom Landgraf. "Automated Traffic Analysis in Aerial Images." In Computer Vision and Graphics, 262–69. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11331-9_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jȩdrasiak, Karol, and Aleksander Nawrat. "Image Recognition Technique for Unmanned Aerial Vehicles." In Computer Vision and Graphics, 391–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02345-3_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Reilly, Vladimir, Berkan Solmaz, and Mubarak Shah. "Geometric Constraints for Human Detection in Aerial Imagery." In Computer Vision – ECCV 2010, 252–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15567-3_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rigau, Jaume, Miquel Feixas, and Mateu Sbert. "Image Information in Digital Photography." In Computer Vision – ACCV 2010 Workshops, 122–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22819-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mnih, Volodymyr, and Geoffrey E. Hinton. "Learning to Detect Roads in High-Resolution Aerial Images." In Computer Vision – ECCV 2010, 210–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15567-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Dawei, Yuankai Qi, Hongyang Yu, Yifan Yang, Kaiwen Duan, Guorong Li, Weigang Zhang, Qingming Huang, and Qi Tian. "The Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking." In Computer Vision – ECCV 2018, 375–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Stan Z., Josef Kittler, and Maria Petrou. "Matching and recognition of road networks from aerial images." In Computer Vision — ECCV'92, 857–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55426-2_99.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Di, Xuhong Li, Lichao Mou, Pu Jin, Dong Chen, Liping Jing, Xiaoxiang Zhu, and Dejing Dou. "Cross-Task Transfer for Geotagged Audiovisual Aerial Scene Recognition." In Computer Vision – ECCV 2020, 68–84. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58586-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Iwaneczko, Paweł, Karol Jędrasiak, Krzysztof Daniec, and Aleksander Nawrat. "A Prototype of Unmanned Aerial Vehicle for Image Acquisition." In Computer Vision and Graphics, 87–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33564-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kluckner, Stefan, Thomas Mauthner, Peter M. Roth, and Horst Bischof. "Semantic Classification in Aerial Imagery by Integrating Appearance and Height Information." In Computer Vision – ACCV 2009, 477–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12304-7_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer vision Aerial photography"

1

Knowles, James, James J. Pearson, Brian Ringer, and Joan B. Lurie. "Model-based object recognition in aerial photography." In Interdisciplinary Computer Vision: Applications and Changing Needs--22nd AIPR Workshop, edited by J. Michael Selander. SPIE, 1994. http://dx.doi.org/10.1117/12.169474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Prades, Ignacio J., Jorge Nunez, Fernando Perez, Vincenc Pala, and Roman Arbiol. "Aerial photography restoration using the maximum likelihood estimator (MLE) algorithm." In Spatial Information from Digital Photogrammetry and Computer Vision: ISPRS Commission III Symposium, edited by Heinrich Ebner, Christian Heipke, and Konrad Eder. SPIE, 1994. http://dx.doi.org/10.1117/12.182866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hasinoff, Samuel W., Kiriakos N. Kutulakos, Fredo Durand, and William T. Freeman. "Time-constrained photography." In 2009 IEEE 12th International Conference on Computer Vision (ICCV). IEEE, 2009. http://dx.doi.org/10.1109/iccv.2009.5459269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vaquero, Daniel, and Matthew Turk. "Composition Context Photography." In 2015 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2015. http://dx.doi.org/10.1109/wacv.2015.92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kappal, S. J., and S. G. Narasimhan. "Illustrating motion through DLP photography." In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR Workshops). IEEE, 2009. http://dx.doi.org/10.1109/cvpr.2009.5204315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kappal, Sanjeev J., and Srinivasa G. Narasimhan. "Illustrating motion through DLP photography." In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5204315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fan, Ye-wen, Jian-ya Gong, Chen-guo Tan, and Rui-yao Wang. "Research on Computer-Assisted DEM-Based Flight Plan of Aerial Photography." In 2008 First International Conference on Intelligent Networks and Intelligent Systems (ICINIS). IEEE, 2008. http://dx.doi.org/10.1109/icinis.2008.79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hart, David, Jessica Greenland, and Bryan Morse. "Style Transfer for Light Field Photography." In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2020. http://dx.doi.org/10.1109/wacv45572.2020.9093478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vazquez, Marynel, and Aaron Steinfeld. "An assisted photography method for street scenes." In 2011 IEEE Workshop on Applications of Computer Vision (WACV). IEEE, 2011. http://dx.doi.org/10.1109/wacv.2011.5711488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fang, Yuming, Hanwei Zhu, Yan Zeng, Kede Ma, and Zhou Wang. "Perceptual Quality Assessment of Smartphone Photography." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00373.

Full text
APA, Harvard, Vancouver, ISO, and other styles