Academic literature on the topic 'Autonomous drone navigation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Autonomous drone navigation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Autonomous drone navigation"

1

Journal, IJSREM. "Autonomous Drone Navigation." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 11 (2023): 1–11. http://dx.doi.org/10.55041/ijsrem27050.

Full text
Abstract:
Autonomous drone navigation has evolved significantly as a result of the integration of cutting-edge computer vision and reinforcement learning algorithms. In this study, we propose a comprehensive framework that combines the adaptability of the TD3 neural network, the agility of the Ros2 platform, and the resilience of the YOLOv3 object detection model. Making use of YOLOv3's real-time object detection capabilities, our system proves to be highly adept at recognizing and reacting to changing environmental impediments, which guarantees improved UAV situational awareness. When combined with the TD3 neural network, the Ros2-based navigation framework allows for quick decision-making, which allows the drone to move independently toward predetermined destination points while effectively avoiding obstructions. Our work demonstrates the effectiveness of our suggested strategy through thorough experimentation in simulated situations, highlighting its potential for practical implementation in a variety of applications, from autonomous delivery services to aerial surveillance. The outcomes demonstrate the integrated system's flexibility and resilience, highlighting its capacity to successfully handle challenging navigational situations and guarantee secure and effective aerial operations. Our study adds to the field of autonomous drone navigation by highlighting the vital role that integrated computer vision and machine learning techniques play in enabling smart and flexible unmanned aerial vehicle operations. Keywords— Autonomous drone, YOLOv3, Object detection, Ros2, TD3, Reinforcement learning, Simulated environment, Aerial navigation, Obstacle avoidance, Computer vision, Unmanned aerial vehicles (UAVs), Goal-point navigation, Aerial surveillance
APA, Harvard, Vancouver, ISO, and other styles
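For readers new to TD3, the heart of the update named in this abstract is the clipped double-Q target with target-policy smoothing. Below is a minimal PyTorch sketch of that target computation only; the network sizes, state/action dimensions, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network reused for the actor and both critics."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Illustrative sizes: e.g. goal vector + range readings in, velocity command out.
state_dim, action_dim, max_action = 12, 3, 1.0
actor_target = MLP(state_dim, action_dim)
critic1_target = MLP(state_dim + action_dim, 1)
critic2_target = MLP(state_dim + action_dim, 1)

def td3_target(reward, next_state, done,
               gamma=0.99, policy_noise=0.2, noise_clip=0.5):
    """Clipped double-Q target with target-policy smoothing (the core of TD3)."""
    with torch.no_grad():
        noise = (torch.randn(next_state.size(0), action_dim)
                 * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (torch.tanh(actor_target(next_state)) * max_action
                       + noise).clamp(-max_action, max_action)
        sa = torch.cat([next_state, next_action], dim=1)
        # Taking the minimum of two critics curbs Q-value overestimation.
        q_next = torch.min(critic1_target(sa), critic2_target(sa))
        return reward + gamma * (1.0 - done) * q_next

# A batch of four fictitious transitions.
r, s2, d = torch.zeros(4, 1), torch.randn(4, state_dim), torch.zeros(4, 1)
print(td3_target(r, s2, d).shape)  # torch.Size([4, 1])
```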
2

Gupta, Myra. "Reinforcement Learning for Autonomous Drone Navigation." Innovative Research Thoughts 9, no. 5 (2023): 11–20. http://dx.doi.org/10.36676/irt.2023-v9i5-002.

Full text
Abstract:
Drone navigation involves the process of controlling the movement and flight path of unmanned aerial vehicles (UAVs). It encompasses both the hardware and software systems that enable drones to navigate and maneuver autonomously or under the guidance of a human operator. The utility of drone navigation is vast and varied, making it a critical component in numerous industries and applications. Firstly, drone navigation plays a crucial role in aerial surveillance and reconnaissance. Drones equipped with advanced navigation systems can efficiently patrol large areas, monitor activities, and gather real-time data from various perspectives. This capability is particularly valuable in security and law enforcement operations, disaster response, and environmental monitoring, where access and visibility might be limited.
APA, Harvard, Vancouver, ISO, and other styles
3

Yulianto, Ahmad Wilda, Dhandi Yudhit Yuniar, and Yoyok Heru Prasetyo. "Navigation and Guidance for Autonomous Quadcopter Drones Using Deep Learning on Indoor Corridors." Jurnal Jartel Jurnal Jaringan Telekomunikasi 12, no. 4 (2022): 258–64. http://dx.doi.org/10.33795/jartel.v12i4.422.

Full text
Abstract:
Autonomous drones require accurate navigation and localization algorithms to carry out their duties. Outdoors, drones can use GPS for navigation and localization. However, GPS is often unreliable or not available at all indoors. Therefore, in this research, an autonomous indoor drone navigation model was created using a deep learning algorithm to assist drone navigation automatically, especially in indoor corridor areas. Only the Caddx Ratel 2 FPV camera mounted on the drone was used as input for the deep learning model to navigate the drone forward without colliding with the corridor walls. This research produces two deep learning models: a rotational model to correct the drone's orientation deviations, with a loss of 0.0010 and a mean squared error of 0.0009, and a translation model to correct the drone's translation deviations, with a loss of 0.0140 and a mean squared error of 0.011. The implementation of the two models on autonomous drones reaches an NCR value of 0.2. The conclusion is that the difference in resolution and FOV between the actual images captured by the drone's FPV camera and the images used to train the deep learning model causes a discrepancy in output values during deployment and produces low NCR implementation values.
APA, Harvard, Vancouver, ISO, and other styles
4

Rodriguez, Angel A., Mohammad Shekaramiz, and Mohammad A. S. Masoum. "Computer Vision-Based Path Planning with Indoor Low-Cost Autonomous Drones: An Educational Surrogate Project for Autonomous Wind Farm Navigation." Drones 8, no. 4 (2024): 154. http://dx.doi.org/10.3390/drones8040154.

Full text
Abstract:
The application of computer vision in conjunction with GPS is essential for autonomous wind turbine inspection, particularly when the drone navigates through a wind farm to detect the turbine of interest. Although drones for such inspections use GPS, our study only focuses on the computer vision aspect of navigation, which can be combined with GPS information for better navigation in a wind farm. Here, we employ an affordable, non-GPS-equipped drone within an indoor setting to serve educational needs, enhancing its accessibility. To address navigation without GPS, our solution leverages visual data captured by the drone's front-facing and bottom-facing cameras. We utilize the Hough transform, object detection, and QR codes to control drone positioning and calibration. This approach facilitates accurate navigation in a traveling salesman experiment, where the drone visits each wind turbine and returns to a designated launching point without relying on GPS. To perform experiments and investigate the performance of the proposed computer vision technique, the DJI Tello EDU drone and pedestal fans are used to represent commercial drones and wind turbines, respectively. Our detailed experiments demonstrate the effectiveness of computer vision-based path planning in guiding the drone through a small-scale surrogate wind farm, ensuring energy-efficient paths, collision avoidance, and real-time adaptability. Although our efforts do not replicate the actual scenario of wind turbine inspection using drone technology, they provide valuable educational contributions for those willing to work in this area and for educational institutions seeking to integrate projects like this into courses such as autonomous systems.
APA, Harvard, Vancouver, ISO, and other styles
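The pipeline above relies on standard OpenCV building blocks. A hedged sketch of the two calls it names, Hough line detection and QR-code decoding, might look like the following; the frame source, thresholds, and the heading-correction rule are assumptions, not the authors' code.

```python
import cv2
import numpy as np

frame = cv2.imread("tello_frame.png")           # placeholder frame from the drone
assert frame is not None, "provide a camera frame"
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 1) Hough transform: detect straight edges and estimate heading drift
#    as the median angle of the detected line segments.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=60, minLineLength=40, maxLineGap=10)
if lines is not None:
    angles = [np.arctan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in lines[:, 0]]
    heading_error = float(np.median(angles))
    print(f"heading correction: {np.degrees(heading_error):.1f} deg")

# 2) QR code: decode a landmark naming the next waypoint ('turbine').
payload, points, _ = cv2.QRCodeDetector().detectAndDecode(gray)
if payload:
    print("waypoint landmark:", payload)
```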
5

Joshi, Amit, Aivars Spilbergs, and Elīna Miķelsone. "AI-ENABLED DRONE AUTONOMOUS NAVIGATION AND DECISION MAKING FOR DEFENCE SECURITY." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 4 (June 22, 2024): 138–43. http://dx.doi.org/10.17770/etr2024vol4.8237.

Full text
Abstract:
The combination of Artificial Intelligence (AI) and unmanned aerial vehicles (UAVs), sometimes known as drones, has become a revolutionary approach in modern military and security operations. The purpose of this study is to explore and assess the efficiency of AI-enabled autonomous navigation and decision-making systems for drones in defense security applications. Through a comprehensive literature review, researchers analyze the various AI techniques and algorithms used in these systems, including machine learning, deep learning, and reinforcement learning. The study examines different aspects of autonomous drone navigation, such as sensors, decision-making modules, communication systems, and countermeasure systems. By reviewing scholarly articles and existing studies, researchers gain insights into the hardware and software components, including GPS modules, IMUs, cameras, and other sensors. This analysis provides a clear understanding of the current state of AI-enabled drone technology for defense security and identifies potential areas for future research and improvement. This research study discusses the working of AI-enabled drone autonomous navigation and decision-making systems designed primarily for defense security applications. The study starts by explaining the structure of drone navigation systems, which includes a wide range of hardware and software components. These comprise GPS modules for tracking location, inertial measurement units (IMUs) for estimating attitude, and cameras for seeing the environment. By incorporating these sensors into a sturdy structure, drones are able to detect their surroundings and manoeuvre independently in intricate situations. The effectiveness of AI-enabled drone navigation relies heavily on the application of sophisticated artificial intelligence techniques and algorithms. Machine learning algorithms, such as deep neural networks and reinforcement learning, are crucial in improving the decision-making abilities of drones. AI algorithms allow drones to dynamically adjust their navigation tactics, optimize flight trajectories, and intelligently respond to unforeseen obstacles or hazards by analyzing large volumes of sensor data in real time. Furthermore, this research explores the datasets being employed in the training and evaluation of AI models for the purpose of drone navigation and decision-making. These datasets contain varied environmental conditions, topographical features, and security scenarios experienced in defensive operations.
APA, Harvard, Vancouver, ISO, and other styles
6

Sanchez-Rodriguez, Jose-Pablo, and Alejandro Aceves-Lopez. "A survey on stereo vision-based autonomous navigation for multi-rotor MUAVs." Robotica 36, no. 8 (2018): 1225–43. http://dx.doi.org/10.1017/s0263574718000358.

Full text
Abstract:
This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Drone operation is difficult because pilots need the expertise to fly the drones. Pilots have a limited field of view, and unfortunate situations, such as loss of line of sight or collision with objects such as wires and branches, can happen. Autonomous navigation is an even more difficult challenge than remote control navigation because the drones must make decisions on their own in real time and simultaneously build maps of their surroundings if none is available. Moreover, MUAVs are limited in terms of useful payload capability and energy consumption. Therefore, a drone must be equipped with small sensors, and it must carry low weight. In addition, a drone requires a sufficiently powerful onboard computer so that it can understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities. Hence, a drone can perform vision-based navigation through object recognition and self-localise inside a map if one is available; otherwise, its autonomous navigation creates a simultaneous localisation and mapping problem.
APA, Harvard, Vancouver, ISO, and other styles
7

Gediya, Kalrav. "Enhancing Autonomous Drone Navigation: YOLOv5-Based Object Detection and Collision Avoidances." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem47337.

Full text
Abstract:
In recent years, drone technology has seen profound advancements, especially with regard to safe and autonomous operation, which relies heavily on object detection and avoidance capabilities. These autonomously functioning drones can operate in challenging environments for tasks like search and rescue operations, as well as industrial monitoring. The present research focuses on enhancing object detection for autonomous drones by utilizing publicly available image datasets instead of custom images. Datasets like VisDrone, DroneDeploy, and DOTA contain a plethora of real-life images that make them ideal candidates for improving the accuracy and robustness of object detection models. We propose an optimized method for training the YOLOv5 model to enhance object detection. The models are evaluated on precision, recall, F1-score, and mAP for both CNN and YOLO approaches. The findings show that using the YOLOv5 deep architecture to implement real-time object detection and avoidance in UAVs is more efficient than traditional CNN approaches. Keywords: Autonomous Drone, CNN, YOLOv5, Sensor Integration, Deep Learning.
APA, Harvard, Vancouver, ISO, and other styles
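For context on the abstract above, YOLOv5 exposes a published torch.hub interface for inference. The sketch below assumes pretrained weights and a placeholder image path, and the "avoidance rule" at the end is purely illustrative, not the paper's method.

```python
import torch

# Load the small pretrained YOLOv5 model via its official torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("aerial_frame.jpg")        # placeholder input image
detections = results.pandas().xyxy[0]      # DataFrame: xmin..ymax, confidence, name

# Placeholder avoidance rule: report any confident detection.
for _, det in detections.iterrows():
    if det["confidence"] > 0.5:
        cx = (det["xmin"] + det["xmax"]) / 2
        print(f"obstacle '{det['name']}' at x={cx:.0f}, conf={det['confidence']:.2f}")
```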
8

Patel, Kaushika, and Yash Sojitra. "Autonomous Surveillance Drone." International Journal of Scientific Research in Science, Engineering and Technology 12, no. 2 (2025): 750–54. https://doi.org/10.32628/ijsrset25122202.

Full text
Abstract:
This paper presents the design, development, and testing of a fully autonomous surveillance drone built using a quadcopter frame, Brushless DC (BLDC) motors, Electronic Speed Controllers (ESCs), a Raspberry Pi 4, GPS, and a Pixhawk flight controller. The drone achieves autonomous navigation using telemetry communication and DroneKit-MAVProxy, a Python-based software framework, in conjunction with ground control tools like QGroundControl and Mission Planner. The goal is to create a low-cost, user-friendly UAV (Unmanned Aerial Vehicle) capable of performing tasks like area monitoring, GPS-based navigation, and data collection. Testing was conducted to assess flight stability, GPS accuracy, communication range, and mission execution success. The results confirm the drone's reliability in open-field scenarios, with potential for future enhancements such as AI-based object detection, collision avoidance, and multi-drone swarm operation.
APA, Harvard, Vancouver, ISO, and other styles
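The drone above is scripted with DroneKit-Python. A minimal guided-mode waypoint sketch using that library's documented calls follows; the connection string, altitude, and coordinates are placeholders, and a real mission would poll the distance to the target instead of sleeping.

```python
import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

# Placeholder endpoint: a SITL simulator or telemetry radio link.
vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:                  # wait for the autopilot to confirm
    time.sleep(0.5)

vehicle.simple_takeoff(10)                # climb to 10 m
while vehicle.location.global_relative_frame.alt < 9.5:
    time.sleep(0.5)

# Fly to a placeholder survey point, then return to launch.
vehicle.simple_goto(LocationGlobalRelative(-35.3632, 149.1652, 10))
time.sleep(30)                            # crude wait; real code checks distance
vehicle.mode = VehicleMode("RTL")
vehicle.close()
```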
9

Chronis, Christos, Georgios Anagnostopoulos, Elena Politi, Antonios Garyfallou, Iraklis Varlamis, and George Dimitrakopoulos. "Path planning of autonomous UAVs using reinforcement learning." Journal of Physics: Conference Series 2526, no. 1 (2023): 012088. http://dx.doi.org/10.1088/1742-6596/2526/1/012088.

Full text
Abstract:
Autonomous BVLOS (Beyond Visual Line of Sight) Unmanned Aerial Vehicles (UAVs) are gradually gaining their share of the drone market. Together with the demand for extended levels of autonomy comes the necessity for high-performance obstacle avoidance and navigation algorithms that will allow autonomous drones to operate with minimum or no human intervention. Traditional AI algorithms have been extensively used in the literature for finding the shortest path in 2-D or 3-D environments and navigating drones successfully through a known and stable environment. However, the situation can become much more complicated when the environment is changing or not known in advance. In this work, we explore the use of advanced artificial intelligence techniques, such as reinforcement learning, to successfully navigate a drone within unspecified environments. We compare our approach against traditional AI algorithms in a set of validation experiments in a simulation environment, and the results show that using only a couple of low-cost distance sensors it is possible to successfully navigate the drone past obstacles.
APA, Harvard, Vancouver, ISO, and other styles
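As a toy illustration of the idea in the abstract above (not the authors' algorithm, which the abstract does not fully specify), a tabular Q-learning update over discretized readings from two low-cost distance sensors could look like this; the binning, reward, and action set are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 3 * 3            # two range sensors, each binned near/mid/far
n_actions = 3               # turn left, go straight, turn right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def discretize(left_dist, right_dist, edges=(0.5, 1.5)):
    """Map two range readings (metres) to one small state index."""
    l = int(np.digitize(left_dist, edges))
    r = int(np.digitize(right_dist, edges))
    return 3 * l + r

def q_update(s, a, reward, s_next):
    """Standard one-step Q-learning backup."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# One fictitious transition: obstacle close on the left, agent turns right.
s = discretize(0.3, 2.0)
a = 2 if rng.random() > eps else int(rng.integers(n_actions))
q_update(s, a, reward=1.0, s_next=discretize(0.8, 2.0))
print(Q[s])
```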
10

Garg, Inakshi, and Harsh Pandey. "Autonomous Drone Navigation Using Computer Vision." International Journal for Research in Applied Science and Engineering Technology 12, no. 10 (2024): 961–68. http://dx.doi.org/10.22214/ijraset.2024.64728.

Full text
Abstract:
Autonomous drone navigation with computer vision is an innovative technology that allows drones to move through intricate surroundings without needing human control. This system uses live visual information to identify objects, steer clear of obstacles, and determine routes, improving operational safety and precision. The project's main objective is to create a vision-based navigation system that combines object detection and obstacle avoidance algorithms through deep learning methods like YOLO (You Only Look Once), alongside real-time sensor fusion. The drone uses computer vision algorithms to process aerial images and automatically changes its flight path to prevent crashes. A personalized dataset of aerial images is generated and utilized for improving object detection skills in various environments. In order to guarantee practicality in real-world situations, the system undergoes validation through simulations and field tests on different terrains, focusing on its resilience in changing environments. Improvements in navigation accuracy and obstacle detection are accomplished by implementing adaptive path-planning and integrating multiple sensors, guaranteeing the drone's efficient operation in real-life situations. The goal of this method is to enhance the drone's ability to make decisions, minimize human error, and expand its range of potential uses in areas like surveillance, agriculture, and disaster response.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Autonomous drone navigation"

1

Habib, Yassine. "Monocular SLAM densification for 3D mapping and autonomous drone navigation." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0390.

Full text
Abstract:
Aerial drones are essential in search and rescue missions as they provide fast reconnaissance of the mission area, such as a collapsed building. Creating a dense and metric 3D map in real time is crucial to capture the structure of the environment and enable autonomous navigation. The recommended approach for this task is to use Simultaneous Localization and Mapping (SLAM) from a monocular camera synchronized with an Inertial Measurement Unit (IMU). Current state-of-the-art algorithms maximize efficiency by triangulating a minimum number of points, resulting in a sparse 3D point cloud. Few works address monocular SLAM densification, typically by using deep neural networks to predict a dense depth map from a single image. Most are not metric or are too complex for use in embedded applications. In this thesis, we identify and evaluate a state-of-the-art monocular SLAM baseline under challenging drone conditions. We present a practical pipeline for densifying monocular SLAM by applying monocular depth prediction to construct a dense and metric 3D voxel map. Using voxels allows the efficient construction and maintenance of the map through raycasting, and allows for volumetric multi-view fusion. Finally, we propose a scale recovery procedure that uses the sparse and metric depth estimates of SLAM to refine the predicted dense depth maps. Our approach has been evaluated on conventional benchmarks and shows promising results for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
2

Sfard, Nathan. "Towards Autonomous Localization of an Underwater Drone." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1866.

Full text
Abstract:
Autonomous vehicle navigation is a complex and challenging task. Land and aerial vehicles often use highly accurate GPS sensors to localize themselves in their environments. These sensors are ineffective in underwater environments due to signal attenuation. Autonomous underwater vehicles utilize one or more of the following approaches for successful localization and navigation: inertial/dead-reckoning, acoustic signals, and geophysical data. This thesis examines autonomous localization in a simulated environment for an OpenROV Underwater Drone using a Kalman Filter. This filter performs state estimation for a dead reckoning system exhibiting an additive error in location measurements. We evaluate the accuracy of this Kalman Filter by analyzing the effect each parameter has on accuracy, then choosing the best combination of parameter values to assess the overall accuracy of the Kalman Filter. We find that the two parameters with the greatest effects on the system are the constant acceleration and the measurement uncertainty of the system. We find the filter employing the best combination of parameters can greatly reduce measurement error and improve accuracy under typical operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
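The thesis above singles out the constant acceleration and the measurement uncertainty as the filter's most sensitive parameters. A minimal 1-D constant-acceleration Kalman filter, given here as an illustrative sketch rather than the thesis's implementation, makes both parameters explicit.

```python
import numpy as np

dt, accel = 1.0, 0.05             # time step and assumed constant acceleration
F = np.array([[1.0, dt],          # state transition for [position, velocity]
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])   # how the acceleration enters the state
H = np.array([[1.0, 0.0]])        # only position is measured
Q = 1e-4 * np.eye(2)              # process noise covariance
R = np.array([[0.5]])             # measurement uncertainty (the tunable knob)

x = np.zeros(2)                   # state estimate [position, velocity]
P = np.eye(2)                     # estimate covariance

def kf_step(z):
    """One predict/update cycle for a noisy position measurement z."""
    global x, P
    # Predict: propagate state and covariance under constant acceleration.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update: blend the measurement in via the Kalman gain.
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x

for t, z in enumerate([0.1, 0.3, 0.7, 1.2]):  # fictitious measurements
    print(t, kf_step(np.array([z])))
```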
3

Grymin, David J. "Development of a novel method for autonomous navigation and landing of unmanned aerial vehicles /." Online version of thesis, 2009. http://hdl.handle.net/1850/10615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Beyers, Coenraad Johannes. "Motion planning algorithms for autonomous navigation for a rotary-wing UAV." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80231.

Full text
Abstract:
This project concerns motion planning for a rotary-wing UAV, where vehicle controllers are already in place and map data is readily available to a collision detection module. In broad terms, the goal of the motion planning algorithm is to provide a safe (i.e., obstacle-free) flight path between an initial and a goal waypoint. This project looks at two specific motion planning algorithms, the Rapidly Exploring Random Tree (or RRT*) and the Probabilistic Roadmap Method (or PRM). The primary focus of this project is learning how these algorithms behave in specific environments, and an in-depth analysis is done of their differences. A secondary focus is the execution of planned paths via a Simulink simulation, and lastly, this project also looks at the effect of path replanning. The work done in this project enables a rotary-wing UAV to autonomously navigate an uncertain, dynamic and cluttered environment. The work also provides insight into the choice of an algorithm for a given environment: knowing which algorithm performs better can save valuable processing time and will make the entire system more responsive.
APA, Harvard, Vancouver, ISO, and other styles
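To make the sampling-based planners compared above concrete, here is a bare-bones 2-D sketch of plain RRT (without the rewiring step that distinguishes RRT*); the workspace bounds, obstacle, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[0.0, 10.0], [0.0, 10.0]])   # assumed 2-D workspace
obstacle_centre, obstacle_radius = np.array([5.0, 5.0]), 1.5
start, goal = np.array([1.0, 1.0]), np.array([9.0, 9.0])
step, iters = 0.5, 2000

def collision_free(p):
    return np.linalg.norm(p - obstacle_centre) > obstacle_radius

nodes, parents = [start], {0: None}
for _ in range(iters):
    sample = rng.uniform(bounds[:, 0], bounds[:, 1])
    near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
    direction = sample - nodes[near]
    new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-9)
    if collision_free(new):
        parents[len(nodes)] = near             # index the new node will get
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:  # reached the goal region
            break

# Walk back from the last node to recover a (non-optimized) path.
i, path = len(nodes) - 1, []
while i is not None:
    path.append(nodes[i])
    i = parents[i]
print(f"{len(nodes)} nodes grown, path of {len(path)} waypoints")
```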
5

Rowley, Dale D. "Real-time Evaluation of Vision-based Navigation for Autonomous Landing of a Rotorcraft Unmanned Aerial Vehicle in a Non-cooperative Environment." Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd697.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Leroy, Pierre. "Représentation spatiale à des fins d'assistance à la navigation embarquée par association mixte de capteurs." Electronic Thesis or Diss., Paris, ENSAM, 2025. http://www.theses.fr/2025ENAME016.

Full text
Abstract:
Planted forests play a crucial role as carbon sinks and economic resources. However, their sustainable management is complex due to major challenges such as fires and diseases, as well as ongoing constraints of various kinds, such as climatic conditions or environmental forces like wind, which impact forest growth cycles and require detailed information that is often difficult to collect. Autonomous drone navigation beneath the canopy emerges as a promising solution for efficient data collection, responding to the growing interest in this field within the robotics community. This research is structured around three main axes. The first axis involves the development of a real-time image segmentation system integrated into an embedded system using a stereoscopic camera. The second axis focuses on leveraging the results of this segmentation to build a semantic map and enable efficient drone localization. Finally, the third axis focuses on navigation and path planning strategies based on the analysis of this map to meet operational objectives safely. The major contributions of this research include an innovation in sensor positioning, notably the original use of a vertically positioned LiDAR, which required the development of new real-time perception methods. Additionally, a semantic SLAM algorithm was developed, leveraging image segmentation and the geometric characteristics of maritime pine trees. Finally, a navigation strategy based on potential fields was implemented, enabling safe navigation through automatically positioned safety points according to the mission orders. This work offers new perspectives for sustainable forest management by improving the precision and efficiency of data collection at the individual tree level.
APA, Harvard, Vancouver, ISO, and other styles
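The thesis's navigation layer is based on potential fields over automatically placed safety points. A generic attractive/repulsive force computation, sketched with assumed gains and influence range rather than the thesis's tuning, looks like this; the repulsive term is the classic form that grows steeply as the obstacle distance shrinks.

```python
import numpy as np

K_ATT, K_REP, INFLUENCE = 1.0, 2.0, 3.0   # assumed gains and influence range (m)

def potential_field_step(pos, goal, obstacles):
    """Resultant steering vector: attraction to the goal, repulsion from trees."""
    force = K_ATT * (goal - pos)                      # linear attractive pull
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < INFLUENCE:
            # Repulsion active only inside the influence range of an obstacle.
            force += K_REP * (1.0 / d - 1.0 / INFLUENCE) / d**2 * (diff / d)
    return force

pos = np.array([0.0, 0.0, 1.5])                       # drone position (x, y, z)
goal = np.array([10.0, 0.0, 1.5])                     # next safety point
trees = [np.array([4.0, 0.5, 1.5]), np.array([7.0, -0.8, 1.5])]
print(potential_field_step(pos, goal, trees))         # direction to steer
```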
7

Lizarraga, Mariano I. "Autonomous landing system for a UAV." Thesis, Monterey California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1655.

Full text
Abstract:
This thesis is part of ongoing research conducted at the Naval Postgraduate School to achieve the autonomous shipboard landing of Unmanned Aerial Vehicles (UAVs). Two main problems are addressed in this thesis. The first is to establish effective communication between the UAV's ground station and the Autonomous Landing Flight Control Computer. The second addresses the design and implementation of an autonomous landing controller using classical control techniques. Device drivers for the sensors and the communications protocol were developed in ANSI C. The overall system was implemented on a PC104 computer running a real-time operating system developed by The MathWorks, Inc. Computer and hardware-in-the-loop (HIL) simulation, as well as ground test results, show the feasibility of the algorithm proposed here. Flight tests are scheduled to be performed in the near future.
APA, Harvard, Vancouver, ISO, and other styles
8

Mercado-Ravell, Diego Alberto. "Autonomous navigation and teleoperation of unmanned aerial vehicles using monocular vision." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2239/document.

Full text
Abstract:
The present document addresses, theoretically and experimentally, the most relevant topics for Unmanned Aerial Vehicles (UAVs) in autonomous and semi-autonomous navigation. In accordance with the multidisciplinary nature of the studied problems, a wide range of techniques and theories are covered in the fields of robotics, automatic control, computer science, computer vision and embedded systems, among others. As part of this thesis, two different experimental platforms were developed in order to explore and evaluate various theories and techniques of interest for autonomous navigation. The first prototype is a quadrotor specially designed for outdoor applications and was fully developed in our lab. The second testbed is composed of an inexpensive commercial AR.Drone quadrotor, wirelessly connected to a ground station equipped with the Robot Operating System (ROS), and specially intended to test computer vision algorithms and automatic control strategies in an easy, fast and safe way. In addition, this work provides a study of data fusion techniques looking to enhance the UAV's pose estimation provided by commonly used sensors. Two strategies are evaluated in particular, an Extended Kalman Filter (EKF) and a Particle Filter (PF). Both estimators are adapted for the system under consideration, taking into account noisy measurements of the UAV position, velocity and orientation. Simulations show the performance of the developed algorithms while adding noise from real GPS (Global Positioning System) measurements. Safe and accurate navigation for either autonomous trajectory tracking or haptic teleoperation of quadrotors is presented as well. A second-order Sliding Mode (2-SM) control algorithm is used to track trajectories while avoiding frontal collisions in autonomous flight. The time-scale separation of the translational and rotational dynamics allows us to design position controllers by giving desired references in the roll and pitch angles, which is suitable for quadrotors equipped with an internal attitude controller. The 2-SM control adds robustness to the closed-loop system, and a Lyapunov-based analysis proves the system's stability. Vision algorithms are employed to estimate the pose of the vehicle using only monocular SLAM (Simultaneous Localization and Mapping) fused with inertial measurements. Distance to potential obstacles is detected and computed using the sparse depth map from the vision algorithm. For teleoperation tests, a haptic device is employed to feed information back to the pilot about possible collisions, by exerting opposite forces. The proposed strategies are successfully tested in real-time experiments, using a low-cost commercial quadrotor. Also, the conception and development of a Micro Aerial Vehicle (MAV) able to safely interact with human users by following them autonomously is achieved in the present work. Once a face is detected by means of a Haar cascade classifier, it is tracked by applying a Kalman Filter (KF), and an estimate of the relative position with respect to the face is obtained at a high rate. A linear Proportional-Derivative (PD) controller regulates the UAV's position in order to keep a constant distance to the face, employing as well the extra available information from the embedded UAV sensors. Several experiments were carried out under different conditions, showing good performance even in disadvantageous scenarios like outdoor flight, with robustness against illumination changes, wind perturbations, image noise and the presence of several faces in the same image. Finally, this thesis deals with the problem of implementing a safe and fast transportation system using a quadrotor-type UAV with a cable-suspended load. The objective consists in transporting the load from one place to another, in a fast way and with minimum swing of the cable.
APA, Harvard, Vancouver, ISO, and other styles
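Among the techniques above, the face-following loop closes with a linear PD controller holding a constant distance to the detected face. A one-axis sketch with illustrative gains (not the thesis's tuning) follows.

```python
KP, KD, TARGET_DIST = 0.8, 0.3, 1.5   # illustrative gains and desired range (m)

class PDController:
    """Proportional-derivative regulator for a single axis."""
    def __init__(self, kp, kd):
        self.kp, self.kd, self.prev_error = kp, kd, None

    def update(self, error, dt):
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * deriv

pd = PDController(KP, KD)
# Fictitious face-distance estimates from the vision pipeline, at 10 Hz.
for dist in [2.4, 2.1, 1.9, 1.7, 1.6]:
    cmd = pd.update(dist - TARGET_DIST, dt=0.1)   # forward velocity command
    print(f"distance {dist:.1f} m -> command {cmd:+.2f}")
```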
9

Hattenberger, Gautier. "Vol en formation sans formation : contrôle et planification pour le vol en formation des avions sans pilote." PhD thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00353676.

Full text
Abstract:
The purpose of this thesis is the study and implementation of a system for automatically managing the configuration of a formation of unmanned aircraft, or drones. The objectives are, on the one hand, to improve the safety and efficiency of a group of combat drones and, on the other hand, to bridge the gap between the mission planning levels and the functional levels controlling the formation. Formation flight is particularly well suited to military applications in hostile environments, which require synchronization when arriving on targets or mutual support for jamming. One of the difficulties raised is the autonomous choice of the configuration. Our approach is to introduce, between the decision levels and the functional levels, an intermediate layer dedicated to the formation and to the autonomous management of its configuration. The configuration thus determined must be assigned to the aircraft of the formation while taking into account tactical constraints and the resources of each aircraft. Finally, flight safety is paramount: it must be possible to plan reconfiguration maneuvers to move from one configuration to another while respecting minimum separation distances between aircraft. Solutions were developed based on the Branch & Bound algorithm to solve the assignment problems, and on the A* algorithm for trajectory planning within the formation. In addition, a formation flight controller was implemented. This made it possible to validate our approach through simulations and real-world experiments.
APA, Harvard, Vancouver, ISO, and other styles
10

Le Barz, Cédric. "Navigation visuelle pour les missions autonomes des petits drones." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066424/document.

Full text
Abstract:
In this last decade, technology evolution has enabled the development of small and light UAVs able to evolve in indoor and urban environments. In order to execute the missions assigned to them, UAVs must have a robust navigation system, including a precise ego-localization capability within an absolute reference frame. We propose to solve this problem by matching the latest acquired images with geo-referenced images, i.e. Google Streetview images. In a first step, assuming that it is possible for a given query image to retrieve the geo-referenced image depicting the same scene, we study a solution, based on relative pose estimation between images, to refine the location. Then, to retrieve the geo-referenced images corresponding to acquired images, we studied and proposed a hybrid method exploiting both visual and odometric information by defining an appropriate Hidden Markov Model (HMM), where the states are geographical locations. The quality of the achieved performance depending on visual similarities, we finally proposed an original solution based on supervised metric learning. The solution measures similarities between the query images and geo-referenced images close to the putative position, thanks to distances learnt during a preliminary step.
APA, Harvard, Vancouver, ISO, and other styles
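The hybrid method above treats geographic locations as the hidden states of an HMM, with odometry driving the transitions and visual similarity acting as the emission. A single forward-algorithm step over toy matrices illustrates the structure; every number below is an assumption for the sake of the example.

```python
import numpy as np

# Five candidate Streetview locations along a street (toy hidden states).
n = 5
belief = np.full(n, 1.0 / n)                 # prior over locations

# Odometry suggests we most likely advanced one position (assumed transition).
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = 0.2                            # stayed put
    T[i, min(i + 1, n - 1)] += 0.8           # moved forward

# Visual similarity of the query image to each geo-referenced image
# (assumed scores, e.g. from a learned distance metric).
emission = np.array([0.05, 0.10, 0.60, 0.20, 0.05])

# Forward step: propagate by odometry, reweight by appearance, normalize.
belief = emission * (T.T @ belief)
belief /= belief.sum()
print(belief)                                # posterior over candidate locations
```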
More sources

Books on the topic "Autonomous drone navigation"

1

Dryden Flight Research Facility, ed. Autonomous RPRV navigation guidance and control. Systems Technology, Inc., 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

United States National Aeronautics and Space Administration, ed. Search problems in mission planning and navigation of autonomous aircraft. Purdue University, School of Aeronautics and Astronautics, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Castillo-Garcia, Pedro, Laura Elena Munoz Hernandez, and Pedro Garcia Gil. Indoor Navigation Strategies for Aerial Autonomous Systems. Elsevier Science & Technology Books, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Castillo-Garcia, Pedro, Laura Elena Munoz Hernandez, and Pedro Garcia Gil. Indoor Navigation Strategies for Aerial Autonomous Systems. Elsevier Science & Technology, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Autonomous drone navigation"

1

Veerawal, Sumit, Shashank Bhushan, Mohak Raja Mansharamani, and Bishwajit Sharma. "Vision Based Autonomous Drone Navigation Through Enclosed Spaces." In Communications in Computer and Information Science. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1103-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Makkar, Mohit, Shagun Chandel, Rahul Singhal, and Pushpendra Kumar. "Analysis of PID Control Method for Autonomous Drone Navigation." In Lecture Notes in Mechanical Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-3651-5_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Power, William, Martin Pavlovski, Daniel Saranovic, Ivan Stojkovic, and Zoran Obradovic. "Autonomous Navigation for Drone Swarms in GPS-Denied Environments Using Structured Learning." In IFIP Advances in Information and Communication Technology. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49186-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Karatzas, Andreas, Aristeidis Karras, Christos Karras, Konstantinos C. Giotopoulos, Konstantinos Oikonomou, and Spyros Sioutas. "On Autonomous Drone Navigation Using Deep Learning and an Intelligent Rainbow DQN Agent." In Intelligent Data Engineering and Automated Learning – IDEAL 2022. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21753-1_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Adam, Zourari, My Abdelkader Youssefi, Youssef Ben Youssef, Rachid Dakir, and Mohamed BAKIR. "Enhancing Autonomous Drone Navigation in Unfamiliar Environments with Predictive PID Control and Neural Network Integration." In Sustainable Civil Infrastructures. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70992-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Blubaugh, David Allen, Steven D. Harbour, Benjamin Sears, and Michael J. Findler. "Navigation, SLAM, and Goals." In Intelligent Autonomous Drones with Cognitive Deep Learning. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-6803-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Santamaria-Navarro, Angel, Rohan Thakker, David D. Fan, Benjamin Morrell, and Ali-akbar Agha-mohammadi. "Towards Resilient Autonomous Navigation of Drones." In Springer Proceedings in Advanced Robotics. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95459-8_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jacob, Billy, Abhishek Kaushik, and Pankaj Velavan. "Autonomous Navigation of Drones Using Reinforcement Learning." In Advances in Augmented Reality and Virtual Reality. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7220-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yasin, Jawad N., Sherif A. S. Mohamed, Mohammad-Hashem Haghbayan, Jukka Heikkonen, Hannu Tenhunen, and Juha Plosila. "Navigation of Autonomous Swarm of Drones Using Translational Coordinates." In Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness. The PAAMS Collection. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49778-1_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhong, Zicheng, Zhiyan Lin, Xiangzhou Jiang, Chenhao Wang, Chao Zhang, and Zichang Li. "Autonomous Piloting Algorithms for Drones in Underground Space with No GPS Navigation." In Mechanisms and Machine Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-77489-8_79.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Autonomous drone navigation"

1

Lodi, M. K., Sudeshna Sarkar, and Atul Dadhich. "Autonomous Drone Navigation in Dynamic and Uncertain Environments." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10724753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Darapureddy, Aditya, Ashok Guravana, T. Daniya, Dinesh Potunuru, Boyina Rajesh, and Amrutha Vugiri. "Dronav: Deep Learning Based Autonomous Drone Navigation System." In 2024 International Conference on Smart Technologies for Sustainable Development Goals (ICSTSDG). IEEE, 2024. https://doi.org/10.1109/icstsdg61998.2024.11026630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Viranshu, Swati Gupta, and M. K. Ranganathaswamy. "Autonomous Drone Navigation Based on Real-Time Global Optimization." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10724412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Deeplata, B. Spoorthi, and Arvind Kumar Pandey. "Autonomous Drone Navigation in Closed Environments Using SLAM Techniques." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10723878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hegde, Pratibha Vittal, Mohammed Riyaz Ahmed, and Abdul Haq Nalband. "The Importance of Data Annotation for Autonomous Drone Navigation." In 2024 Third International Conference on Trends in Electrical, Electronics, and Computer Engineering (TEECCON). IEEE, 2024. https://doi.org/10.1109/teeccon64024.2024.10939119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Contreras Carlos, Jesús Abimael, Rodolfo Villalobos Salazar, Neftalí Jonatán González Yances, and América Berenice Morales Díaz. "Autonomous Drone Navigation using Monocular ORB-SLAM2 in Structured Environments." In 2024 XXVI Robotics Mexican Congress (COMRob). IEEE, 2024. https://doi.org/10.1109/comrob64055.2024.10777449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Hongqian, Yun Tang, Antonios Tsourdos, and Weisi Guo. "Contextualized Autonomous Drone Navigation Using LLMs Deployed in Edge-Cloud Computing." In 2025 International Conference on Machine Learning and Autonomous Systems (ICMLAS). IEEE, 2025. https://doi.org/10.1109/icmlas64557.2025.10967934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kumar, Anup, Meenakshi Dheer, and Nagaraj Patil. "Fusion of Lidar and Radial Optical Flow for Autonomous Drone Navigation." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10724465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Swarnalatha, Anumula, Nalluri Archana, Naveen Mukkapati, Subramanian Selvakumar, Eneyachew Tamir, and Manikandaprabu P. "Quantum Generative Adversarial Network for Autonomous Drone Navigation in Urban Wind Zones." In 2025 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI). IEEE, 2025. https://doi.org/10.1109/icmcsi64620.2025.10883107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mohan, Hemanth, Randhir Dinesh, Abhiraj S Kumar, Anwin Mathai, and Deepak S. "Enhanced YOLOv8- Driven Traffic Rule Violation Detection with ROS-Integrated Autonomous Drone Navigation." In 2024 IEEE Recent Advances in Intelligent Computational Systems (RAICS). IEEE, 2024. http://dx.doi.org/10.1109/raics61201.2024.10689890.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Autonomous drone navigation"

1

Alexander, Serena, Bo Yang, Owen Hussey, and Derek Hicks. Examining the Externalities of Highway Capacity Expansions in California: An Analysis of Land Use and Land Cover (LULC) Using Remote Sensing Technology. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kulhandjian, Hovannes. AI-Based Bridge and Road Inspection Framework Using Drones. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2226.

Full text
Abstract:
There are over 590,000 bridges dispersed across the roadway network that stretches across the United States alone. Each bridge with a length of 20 feet or greater must be inspected at least once every 24 months, according to the Federal Highway Act (FHWA) of 1968. This research developed an artificial intelligence (AI)-based framework for bridge and road inspection using drones with multiple sensors collecting capabilities. It is not sufficient to conduct inspections of bridges and roads using cameras alone, so the research team utilized an infrared (IR) camera along with a high-resolution optical camera. In many instances, the IR camera can provide more details to the interior structural damages of a bridge or a road surface than an optical camera, which is more suitable for inspecting damages on the surface of a bridge or a road. In addition, the drone inspection system is equipped with a minicomputer that runs Machine Learning algorithms. These algorithms enable autonomous drone navigation, image capture of the bridge or road structure, and analysis of the images. Whenever any damage is detected, the location coordinates are saved. Thus, the drone can self-operate and carry out the inspection process using advanced AI algorithms developed by the research team. The experimental results reveal the system can detect potholes with an average accuracy of 84.62% using the visible light camera and 95.12% using a thermal camera. This developed bridge and road inspection framework can save time, money, and lives by automating and having drones conduct major inspection operations in place of humans.
APA, Harvard, Vancouver, ISO, and other styles
