Journal articles on the topic 'Vision-Based Aircraft Detection'

Consult the top 43 journal articles for your research on the topic 'Vision-Based Aircraft Detection.'

1

Holak, Krzysztof, and Wojciech Obrocki. "Vision-based damage detection of aircraft engine’s compressor blades." Diagnostyka 22, no. 3 (August 31, 2021): 83–90. http://dx.doi.org/10.29354/diag/141589.

2

Ramalingam, Balakrishnan, Vega-Heredia Manuel, Mohan Rajesh Elara, Ayyalusami Vengadesh, Anirudh Krishna Lakshmanan, Muhammad Ilyas, and Tan Jun Yuan James. "Visual Inspection of the Aircraft Surface Using a Teleoperated Reconfigurable Climbing Robot and Enhanced Deep Learning Technique." International Journal of Aerospace Engineering 2019 (September 12, 2019): 1–14. http://dx.doi.org/10.1155/2019/5137139.

Abstract:
Aircraft surface inspection includes detecting surface defects caused by corrosion and cracks, as well as stains from oil spills, grease, dirt sediments, etc. In the conventional aircraft surface inspection process, human visual inspection is performed, which is time-consuming and inefficient, whereas robots with onboard vision systems can inspect the aircraft skin safely, quickly, and accurately. This work proposes an aircraft surface defect and stain detection model using a reconfigurable climbing robot and an enhanced deep learning algorithm. A reconfigurable, teleoperated robot, named “Kiropter,” is designed to capture aircraft surface images with an onboard RGB camera. An enhanced SSD MobileNet framework is proposed for stain and defect detection from these images. A self-filtering-based periodic pattern detection filter is included in the SSD MobileNet deep learning framework to achieve enhanced detection of stains and defects in the aircraft skin images. The model has been tested with real aircraft surface images acquired from a Boeing 737 and a compact aircraft’s surface using the teleoperated robot. The experimental results show that the enhanced SSD MobileNet framework achieves improved detection accuracy of aircraft surface defects and stains compared to conventional models.
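The paper's self-filtering periodic-pattern filter is specific to its SSD MobileNet pipeline and is not reproduced here. Purely as an illustration of the underlying idea, suppressing repetitive skin texture (rivet rows, panel seams) so that aperiodic stains stand out, here is a minimal frequency-domain sketch; the `peak_ratio` heuristic and the preserved low-frequency window are assumptions:

```python
import numpy as np

def suppress_periodic_texture(gray: np.ndarray, peak_ratio: float = 5.0) -> np.ndarray:
    """Attenuate dominant periodic components in a grayscale image so that
    aperiodic stains and defects stand out for a downstream detector."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag = np.abs(f)
    # Spectral peaks far above the median magnitude are treated as periodic texture.
    keep = mag < peak_ratio * np.median(mag)
    h, w = gray.shape
    keep[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3] = True  # preserve low frequencies
    return np.fft.ifft2(np.fft.ifftshift(f * keep)).real
```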
3

James, Jasmin, Jason J. Ford, and Timothy L. Molloy. "Quickest Detection of Intermittent Signals With Application to Vision-Based Aircraft Detection." IEEE Transactions on Control Systems Technology 27, no. 6 (November 2019): 2703–10. http://dx.doi.org/10.1109/tcst.2018.2872468.

4

Zhou, Liming, Haoxin Yan, Yingzi Shan, Chang Zheng, Yang Liu, Xianyu Zuo, and Baojun Qiao. "Aircraft Detection for Remote Sensing Images Based on Deep Convolutional Neural Networks." Journal of Electrical and Computer Engineering 2021 (August 11, 2021): 1–16. http://dx.doi.org/10.1155/2021/4685644.

Abstract:
Aircraft detection in remote sensing images, as one of the fields of computer vision, is a significant task in deep-learning-based image processing. Recently, many high-performance algorithms for aircraft detection have been developed and applied in different scenarios. However, these algorithms still have a series of problems; for instance, they miss some small-scale aircraft when applied to remote sensing images. There are two main reasons for this. One is that the aircraft in remote sensing images are usually small in size, which makes them difficult to detect. The other is that the background of a remote sensing image is usually complex, so algorithms applied in this scenario are easily affected by it. To address the problem of small size, this paper proposes the Multiscale Detection Network (MSDN), which introduces a multiscale detection architecture to detect small-scale aircraft. To resist background noise, this paper proposes the Deeper and Wider Module (DAWM), which increases the receptive field of the network to alleviate this effect. To address both problems simultaneously, the DAWM is incorporated into the MSDN, and the resulting structure is named the Multiscale Refined Detection Network (MSRDN). The experimental results show that the MSRDN detects small-scale aircraft that other algorithms miss and outperforms them on the reported performance indicators.
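The DAWM itself is defined in the paper; purely as an illustration of how a "deeper and wider" module can enlarge the receptive field while a residual connection preserves small-object detail, here is a minimal PyTorch sketch. The branch count, dilation rates, and fusion scheme are assumptions, not the published design:

```python
import torch
import torch.nn as nn

class DilatedWideBlock(nn.Module):
    """Parallel dilated convolutions widen the receptive field; a 1x1
    convolution fuses the branches and a residual keeps fine detail."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)  # assumed dilation rates
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(y) + x)
```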
5

Molloy, Timothy L., Jason J. Ford, and Luis Mejias. "Detection of aircraft below the horizon for vision-based detect and avoid in unmanned aircraft systems." Journal of Field Robotics 34, no. 7 (May 16, 2017): 1378–91. http://dx.doi.org/10.1002/rob.21719.

6

Wang, Zhenyu, Yanyun Wang, Wei Cheng, Tao Chen, and Hui Zhou. "A monocular vision system based on cooperative targets detection for aircraft pose measurement." Journal of Physics: Conference Series 887 (August 2017): 012029. http://dx.doi.org/10.1088/1742-6596/887/1/012029.

7

Minwalla, Cyrus, Dan Tulpan, Nabil Belacel, Fazel Famili, and Kristopher Ellis. "Detection of Airborne Collision-Course Targets for Sense and Avoid on Unmanned Aircraft Systems Using Machine Vision Techniques." Unmanned Systems 04, no. 04 (October 2016): 255–72. http://dx.doi.org/10.1142/s2301385016500102.

Abstract:
Detecting collision-course targets in aerial scenes from purely passive optical images is challenging for a vision-based sense-and-avoid (SAA) system. Proposed herein is a processing pipeline for detecting and evaluating collision-course targets in airborne imagery using machine vision techniques. The evaluation of eight feature detectors and three spatio-temporal visual cues is presented. Performance metrics for comparing feature detectors include the percentage of detected targets (PDT), the percentage of false positives (POT), and the range at earliest detection. Contrast- and motion-based visual cues are evaluated against standard models and expected spatio-temporal behavior. The analysis is conducted on a multi-year database of imagery captured during actual airborne collision-course flights flown at the National Research Council of Canada. Datasets from two different intruder aircraft, a Bell 206 rotorcraft and a Harvard Mark IV fixed-wing trainer, were compared for accuracy and robustness. Results indicate that the features from accelerated segment test (FAST) feature detector shows the most promise, as it maximizes the range at earliest detection and minimizes false positives. Temporal trends from visual cues analyzed on the same datasets are indicative of collision-course behavior. Robustness of the cues was established across collision geometries, intruder aircraft types, illumination conditions, seasonal environmental variations, and scene clutter.
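OpenCV ships the FAST detector that the comparison singles out; a minimal detection pass over one grayscale frame might look like the sketch below. The threshold is an arbitrary assumption, and the study's full pipeline adds temporal filtering that is not shown:

```python
import cv2

def detect_candidate_targets(frame_gray, threshold=25):
    """FAST corner detection, which the study found best at maximizing
    the range at earliest detection while limiting false positives."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold,
                                          nonmaxSuppression=True)
    keypoints = fast.detect(frame_gray, None)
    return [(kp.pt, kp.response) for kp in keypoints]
```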
8

Campa, G., M. R. Napolitano, M. Perhinschi, M. L. Fravolini, L. Pollini, and M. Mammarella. "Addressing pose estimation issues for machine vision based UAV autonomous aerial refuelling." Aeronautical Journal 111, no. 1120 (June 2007): 389–96. http://dx.doi.org/10.1017/s0001924000004644.

Abstract:
This paper describes the results of an effort on the analysis of the performance of specific ‘pose estimation’ algorithms within a machine-vision-based approach to the problem of aerial refuelling for unmanned aerial vehicles. The approach assumes the availability of a camera on the unmanned aircraft for acquiring images of the refuelling tanker; it also assumes that a number of active or passive light sources – the ‘markers’ – are installed at specific known locations on the tanker. A sequence of machine vision algorithms on the on-board computer of the unmanned aircraft is tasked with processing the images of the tanker. Specifically, detection and labeling algorithms are used to detect and identify the markers, and a ‘pose estimation’ algorithm is used to estimate the relative position and orientation between the two aircraft. Detailed closed-loop simulation studies have been performed to compare the performance of two ‘pose estimation’ algorithms within a simulation environment that was specifically developed for the study of aerial refuelling problems. Special emphasis is placed on the analysis of the required computational effort as well as on the accuracy and the error propagation characteristics of the two methods. The general trade-offs involved in the selection of the pose estimation algorithm are discussed. Finally, simulation results are presented and analysed.
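The paper compares two specific pose estimators in closed-loop simulation; as a generic illustration of the marker-based pose problem it studies, OpenCV's PnP solver recovers the relative pose from labeled marker detections. The marker coordinates below are made-up placeholders, and the camera intrinsics are assumed to come from calibration:

```python
import cv2
import numpy as np

# Hypothetical marker locations on the tanker, in the tanker body frame (metres).
MARKERS_3D = np.array([[0.0, 0.0, 0.0],
                       [2.0, 0.0, 0.0],
                       [2.0, 1.5, 0.0],
                       [0.0, 1.5, 0.5],
                       [1.0, 0.75, 1.0]], dtype=np.float32)

def estimate_relative_pose(marker_px, camera_matrix, dist_coeffs):
    """Labeled 2D marker detections (Nx2 float32, same order as MARKERS_3D)
    -> relative position and orientation via PnP."""
    ok, rvec, tvec = cv2.solvePnP(MARKERS_3D, marker_px,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 attitude matrix
    return rotation, tvec              # tvec: tanker origin in the camera frame
```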
9

Chahl, Javaan, and Aakash Dawadee. "Towards an Optical Aircraft Navigation System." Applied Mechanics and Materials 629 (October 2014): 321–26. http://dx.doi.org/10.4028/www.scientific.net/amm.629.321.

Abstract:
Navigation by means that are fully self-contained, without the weight and cost of high-performance inertial navigation units, is highly desirable in many applications, both military and civilian. In this paper we introduce a suite of sensors and behaviors that includes: a means of reducing lateral drift due to wind using optical flow, detection of a constellation of landmarks using a machine vision system, and a polarization compass that remains reliable at extreme latitudes. In a series of flight trials and detailed simulations we have demonstrated that a combination of these functions achieves purely optical navigation with simplicity and robustness.
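As a sketch of the optical-flow ingredient only (not the authors' implementation), dense Farnebäck flow in OpenCV yields a mean image-plane motion vector whose lateral component could feed a drift-correction loop; all parameter values here are assumptions:

```python
import cv2
import numpy as np

def mean_lateral_flow(prev_gray, curr_gray):
    """Average horizontal optical flow between consecutive frames,
    a crude proxy for wind-induced lateral drift (pixels per frame)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return float(np.mean(flow[..., 0]))
```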
10

Foo, Simon Y. "A rule-based machine vision system for fire detection in aircraft dry bays and engine compartments." Knowledge-Based Systems 9, no. 8 (December 1996): 531–40. http://dx.doi.org/10.1016/s0950-7051(96)00005-6.

11

Andreev, D. S. "Object Detection Method Application to Runway Imagery in Low Visibility Conditions." Journal of the Russian Universities. Radioelectronics, no. 1 (February 28, 2019): 17–28. http://dx.doi.org/10.32603/1993-8985-2019-22-1-17-28.

Abstract:
When ensuring aviation safety, the crew's awareness of the outboard environment in low visibility conditions is especially important. Information about the runway condition and the presence of any obstacles is crucial. Ground-based obstacle detection systems exist, but currently only large airports are equipped with them. There are also Enhanced Vision Systems designed for use on aircraft in low visibility conditions. The main goal of this research is to develop means of runway obstacle recognition in low visibility conditions that improve the capabilities of Enhanced Vision Systems. The research covers only methods for object detection in static images. An analysis of runway markings, objects, and possible obstacles is performed, and targets for acquisition are defined. Runway images in low visibility conditions are simulated on a full-flight simulator. Requirements for feature descriptors and for recognition and detection methods are defined, and the methods to be studied are selected. The paper evaluates the applicability of these methods to runway pictures taken in poor visibility conditions above and below the decision height, taking various characteristics into account. The covered methods solve the problem of detecting objects on the runway in low visibility conditions in a static image. Conclusions are drawn about the possibility of using the studied methods in Enhanced Vision Systems. Further optimization is required to perform detection in video sequences in real time. The results of this work are relevant to the tasks of avionics, computer vision, and image processing.
12

Ruiz, Leandro, Manuel Torres, Alejandro Gómez, Sebastián Díaz, José M. González, and Francisco Cavas. "Detection and Classification of Aircraft Fixation Elements during Manufacturing Processes Using a Convolutional Neural Network." Applied Sciences 10, no. 19 (September 29, 2020): 6856. http://dx.doi.org/10.3390/app10196856.

Abstract:
The aerospace sector is one of the main economic drivers that strengthens our present, constitutes our future, and is a source of competitiveness and innovation with great capacity for technological development. In particular, the objective of manufacturers on assembly lines is to automate the entire process by using digital technologies as part of the transition toward Industry 4.0. In advanced manufacturing processes, artificial vision systems are interesting because their performance influences the reliability and productivity of manufacturing processes. Therefore, developing and validating accurate, reliable, and flexible vision systems in uncontrolled industrial environments is a critical issue. This research deals with the detection and classification of fasteners in a real, uncontrolled environment for an aeronautical manufacturing process, using machine learning techniques based on convolutional neural networks. Our system achieves 98.3% accuracy with a processing time of 0.8 ms per image. The results reveal that a machine learning paradigm based on a neural network in an industrial environment is capable of accurately and reliably estimating mechanical parameters to improve the performance and flexibility of advanced manufacturing processing of large parts with structural responsibility.
13

Fadhil, Ahmed F., Raghuveer Kanneganti, Lalit Gupta, Henry Eberle, and Ravi Vaidyanathan. "Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection." Sensors 19, no. 17 (September 3, 2019): 3802. http://dx.doi.org/10.3390/s19173802.

Abstract:
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons as well as enhancing awareness of the surrounding terrain is introduced, based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
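The paper formulates four specific DWT fusion rules; the sketch below shows only the general shape of one plausible rule (average the approximation sub-bands, keep the larger-magnitude detail coefficients), using PyWavelets on pre-registered, same-size grayscale arrays. The wavelet choice is arbitrary:

```python
import numpy as np
import pywt

def fuse_dwt_max_detail(evs: np.ndarray, svs: np.ndarray, wavelet: str = "db2"):
    """Fuse registered EVS and SVS images: mean approximation band,
    per-coefficient maximum-magnitude detail bands."""
    approx_e, details_e = pywt.dwt2(evs, wavelet)
    approx_s, details_s = pywt.dwt2(svs, wavelet)
    approx = 0.5 * (approx_e + approx_s)
    details = tuple(np.where(np.abs(e) >= np.abs(s), e, s)
                    for e, s in zip(details_e, details_s))
    return pywt.idwt2((approx, details), wavelet)
```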
14

Akbari Sekehravani, Ehsan, Eduard Babulak, and Mehdi Masoodi. "Flying object tracking and classification of military versus nonmilitary aircraft." Bulletin of Electrical Engineering and Informatics 9, no. 4 (August 1, 2020): 1394–403. http://dx.doi.org/10.11591/eei.v9i4.1843.

Abstract:
Tracking moving objects in a sequence of images is one of the important and practical branches of machine vision technology. Detecting and tracking a flying object with unknown features is a particularly challenging problem. This paper consists of two basic parts. The first part involves tracking multiple flying objects: flying objects are first detected and then tracked using the particle filter algorithm. The second part classifies the tracked objects as military or nonmilitary based on four criteria: the size (center of mass) of the object, its speed vector, its direction of motion, and thermal imagery, which together identify the type of the tracked flying object. To demonstrate the efficiency and strength of the algorithm and the above system, several scenarios in different videos have been investigated, including challenges such as the number of objects (aircraft), different paths, diverse directions of motion, different speeds, and various object types. Among the most important challenges are processing speed and the imaging angle.
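For readers unfamiliar with the tracker the paper builds on, here is a minimal bootstrap particle-filter step for a single 2D target. The random-walk motion model and Gaussian position likelihood are simplifying assumptions, not the paper's image-based formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=5.0, meas_std=8.0):
    """One predict/update/resample cycle; particles is (N, 2), weights (N,).
    Likelihood underflow handling is omitted for brevity."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the observed (x, y) position.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```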
15

Hiba, Antal, Levente Márk Sántha, Tamás Zsedrovits, Levente Hajder, and Akos Zarandy. "Onboard Visual Horizon Detection for Unmanned Aerial Systems with Programmable Logic." Electronics 9, no. 4 (April 4, 2020): 614. http://dx.doi.org/10.3390/electronics9040614.

Abstract:
We introduce and analyze a fast horizon detection algorithm with native radial distortion handling, together with its implementation on a low-power field programmable gate array (FPGA) development board. The algorithm is suited to visual applications in an airborne environment, that is, on board a small unmanned aircraft. It was designed to have low complexity because of power consumption requirements. To keep the computational cost low, an initial guess for the horizon is used, provided by the attitude and heading reference system of the aircraft. The camera model takes radial distortion into account, which is necessary for the wide-angle lenses used in most applications. This paper presents formulae for distorted horizon lines and a gradient-sampling-based, resolution-independent, single-shot algorithm for finding the horizon under radial distortion without undistorting the complete image. The implemented algorithm is part of our visual sense-and-avoid system, where it is used for sky-ground separation, and its performance is tested on real flight data. The FPGA implementation of the horizon detection method makes it possible to add this efficient module to any FPGA-based vision system.
16

Aust, Jonas, Antonija Mitrovic, and Dirk Pons. "Comparison of Visual and Visual–Tactile Inspection of Aircraft Engine Blades." Aerospace 8, no. 11 (October 22, 2021): 313. http://dx.doi.org/10.3390/aerospace8110313.

Abstract:
Background—In aircraft engine maintenance, the majority of parts, including engine blades, are inspected visually for any damage to ensure a safe operation. While this process is called visual inspection, there are other human senses encompassed in this process such as tactile perception. Thus, there is a need to better understand the effect of the tactile component on visual inspection performance and whether this effect is consistent for different defect types and expertise groups. Method—This study comprised three experiments, each designed to test different levels of visual and tactile abilities. In each experiment, six industry practitioners of three expertise groups inspected the same sample of N = 26 blades. A two-week interval was allowed between the experiments. Inspection performance was measured in terms of inspection accuracy, inspection time, and defect classification accuracy. Results—The results showed that unrestrained vision and the addition of tactile perception led to higher inspection accuracies of 76.9% and 84.0%, respectively, compared to screen-based inspection with 70.5% accuracy. An improvement was also noted in classification accuracy, as 39.1%, 67.5%, and 79.4% of defects were correctly classified in screen-based, full vision and visual–tactile inspection, respectively. The shortest inspection time was measured for screen-based inspection (18.134 s) followed by visual–tactile (22.140 s) and full vision (25.064 s). Dents benefited the most from the tactile sense, while the false positive rate remained unchanged across all experiments. Nicks and dents were the most difficult to detect and classify and were often confused by operators. Conclusions—Visual inspection in combination with tactile perception led to better performance in inspecting engine blades than visual inspection alone. This has implications for industrial training programmes for fault detection.
17

Fentaye, Amare Desalegn, Valentina Zaccaria, and Konstantinos Kyprianidis. "Aircraft Engine Performance Monitoring and Diagnostics Based on Deep Convolutional Neural Networks." Machines 9, no. 12 (December 7, 2021): 337. http://dx.doi.org/10.3390/machines9120337.

Abstract:
The rapid advancement of machine-learning techniques has played a significant role in the evolution of engine health management technology. In the last decade, deep-learning methods have received a great deal of attention in many application domains, including object recognition and computer vision. Recently, there has been a rapid rise in the use of convolutional neural networks for rotating machinery diagnostics, inspired by their powerful feature learning and classification capability. However, their application in the field of gas turbine diagnostics is still limited. This paper presents a gas turbine fault detection and isolation method using modular convolutional neural networks preceded by a physics-driven performance-trend-monitoring system. The trend-monitoring system is employed to capture performance changes due to degradation, establish a new baseline when needed, and generate fault signatures. The fault detection and isolation system was trained to detect and classify gas path faults step by step down to the component level, using fault signatures obtained from the physics part. The performance of the proposed method was evaluated on different fault scenarios for a three-shaft turbofan engine, under significant measurement noise to ensure model robustness. Two comparative assessments were also carried out: against a fault classification method based on a single convolutional neural network architecture, and against a fault detection and isolation method assisted by a deep long short-term memory network. The results revealed that the proposed method detects and isolates multiple gas path faults with over 96% accuracy. Moreover, sharing diagnostic tasks among modular architectures is seen to significantly enhance diagnostic accuracy.
18

Tang, Jianming, Weidong Zhu, and Yunbo Bi. "A Computer Vision-Based Navigation and Localization Method for Station-Moving Aircraft Transport Platform with Dual Cameras." Sensors 20, no. 1 (January 3, 2020): 279. http://dx.doi.org/10.3390/s20010279.

Abstract:
In order to develop equipment adapted to the aircraft pulse final assembly line, a vision-based aircraft transport platform system is developed. This article explores a low-cost guidance method between assembly stations whose routes are easy to change, using two-dimensional codes and two complementary metal oxide semiconductor (CMOS) cameras. The two cameras, installed on the front and back of the platform, read the two-dimensional code containing station information to guide the platform. During guidance, the theoretical position and posture of the platform at each assembly station are known, but the actual values differ from the theoretical ones due to motion errors. To reduce the influence of this deviation on the navigation route, a localization method is proposed based on the two-dimensional images captured by the cameras. Canny edge detection is applied to the processed image to obtain the position of the two-dimensional code in the image, from which the angle and distance deviations of the platform can be measured. The computer can then locate the platform precisely using the information in the two-dimensional code and the deviation measured from the image. To verify the feasibility of the proposed method, experiments were performed on the developed platform system. The results show that the distance and angle errors of the platform are within ±10 mm and ±0.15°, respectively.
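A much-simplified version of the localization step can be sketched with OpenCV: find the code's outline with Canny edges, then read the platform's offset and rotation from a minimum-area rectangle. The thresholds and the largest-contour heuristic are assumptions:

```python
import cv2

def platform_deviation(image_bgr):
    """Return the pixel offset of the station code from the image centre
    and its rotation angle, as a crude angle/distance deviation measure."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    code = max(contours, key=cv2.contourArea)       # assume the code dominates
    (cx, cy), _size, angle = cv2.minAreaRect(code)
    img_cx, img_cy = gray.shape[1] / 2.0, gray.shape[0] / 2.0
    return (cx - img_cx, cy - img_cy), angle
```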
19

Amami, Mustafa M. "Fast and Reliable Vision-Based Navigation for Real Time Kinematic Applications." International Journal for Research in Applied Science and Engineering Technology 10, no. 2 (February 28, 2022): 922–32. http://dx.doi.org/10.22214/ijraset.2022.40395.

Abstract:
Automatic Image Matching (AIM) is the term used for the automatic detection of corresponding points located in the overlapping areas of multiple images. AIM is extensively used with Mobile Mapping Systems (MMS) for different engineering applications, such as highway infrastructure mapping, monitoring of road surface quality and markings, telecommunication, emergency response, and collecting data for Geographical Information Systems (GIS). The robotics community and Simultaneous Localization And Mapping (SLAM) based applications are other important areas that require fast and well-distributed AIM for robust vision navigation solutions. Different robust feature detection methods are commonly used for AIM, such as the Scale Invariant Feature Transform (SIFT), Principal Component Analysis (PCA)–SIFT, and Speeded Up Robust Features (SURF). The performance of such techniques has been widely investigated and compared, showing a high capability to provide reliable and precise results. However, these techniques are still too limited to be used for real and near-real-time SLAM-based applications, such as intelligent robots and low-cost Unmanned Aerial Vehicles (UAVs) based on vision navigation. The main limitations of these AIM techniques are the relatively long processing time and the random distribution of matched points over the common area between images. This paper works on overcoming these two limitations, providing extremely fast AIM with well-distributed common points for robust real-time vision navigation. Digital image pyramids, the epipolar line, and a 2D transformation have been utilized to limit the size of the search windows significantly and to determine the rotation angle and scale level of features, reducing the overall processing time considerably. Using a limited number of well-distributed common points has also helped to speed up the automatic matching, besides providing a robust vision navigation solution. The idea has been tested with terrestrial MMS images and surveying UAV aerial images. The results reflect the high capability of the followed technique to provide fast and robust AIM for real-time SLAM-based applications. Keywords: Automatic Image Matching, Epipolar Line, Image Pyramid, SLAM, Vision Navigation, Real Time.
20

Tu, Da Wei, Xu Zhang, Kai Fei, and Xi Zhang. "Laser Stereo Vision-Based Position and Attitude Detection of Non-Cooperative Target for Space Application." Advanced Materials Research 1039 (October 2014): 242–50. http://dx.doi.org/10.4028/www.scientific.net/amr.1039.242.

Abstract:
Vision measurement of non-cooperative targets in space is an essential technique in space counterwork, fragment disposal, on-orbit satellite servicing, and spacecraft rendezvous, because the position and attitude of the target aircraft or object must be detected first in the process. A 2D passive camera loses depth information and cannot measure the position and attitude of a non-cooperative target. Several kinds of range imaging methods are alternatives. The traditional triangulation method provides very high precision at close range, but the nature of the triangulation geometry means that the uncertainty grows as the range increases. Laser radar (LIDAR) based on the time-of-flight (TOF) or phase difference principle is suitable for middle and long ranges, but not for short range. A novel system structure is put forward, in which a so-called synchronous scanning triangulation method is combined with a LIDAR system. The synchronous scanning triangulation system covers the range from 0.5 m to 10 m for the object's attitude, and the LIDAR system covers the range from 10 m to 200 m for the object's position (direction and range). They are merged into one optical path and do not influence each other because they use two different wavelengths. This mechanism makes the system more compact and lighter. The system performances, such as measurement range and precision, are analyzed according to the system parameters. A principle prototype is designed and built, and the experimental results confirm that its performance is promising and can satisfy the requirements for space application.
21

Santana, Lucas Santos, Gabriel Araújo e. Silva Ferraz, Gabriel Henrique Ribeiro dos Santos, Nicole Lopes Bento, and Rafael de Oliveira Faria. "Identification and Counting of Coffee Trees Based on Convolutional Neural Network Applied to RGB Images Obtained by RPA." Sustainability 15, no. 1 (January 2, 2023): 820. http://dx.doi.org/10.3390/su15010820.

Abstract:
Computer vision algorithms for counting plants are an indispensable alternative in managing coffee growing. This research aimed to develop an algorithm for the automatic counting of coffee plants and to determine the best age at which to monitor plants using remotely piloted aircraft (RPA) images. The algorithm was based on a convolutional neural network (CNN) system and the Open Source Computer Vision Library (OpenCV). The analyses were carried out in coffee-growing areas at development stages of three, six, and twelve months after planting. After obtaining the images, the dataset was organized and fed into a You Only Look Once (YOLOv3) neural network. The training stage used 7458 plants aged three, six, and twelve months, reaching stability between 3000 and 4000 iterations. Plant detection at twelve months was not possible due to crown unification. A counting accuracy of 86.5% was achieved with plants at three months of development. The plants' characteristics at this age may have influenced the reduction in accuracy, and the low uniformity of the canopy may have made it challenging for the neural network to define a pattern. In plantations at six months of development, 96.8% accuracy was obtained for counting plants automatically. This analysis enables the development of an algorithm for the automated counting of coffee plants using RGB images obtained by remotely piloted aircraft and machine learning applications.
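The trained network and imagery are not public, but the counting step itself is a standard YOLO inference pass; a sketch with OpenCV's DNN module follows. The cfg/weights file names are placeholders for whatever the trained model would be:

```python
import cv2
import numpy as np

# Placeholder file names for a YOLOv3 model trained on coffee plants.
net = cv2.dnn.readNetFromDarknet("yolov3-coffee.cfg", "yolov3-coffee.weights")

def count_plants(image_bgr, conf_thresh=0.5, nms_thresh=0.4):
    """Count YOLO detections that survive confidence filtering and NMS."""
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = image_bgr.shape[:2]
    boxes, scores = [], []
    for out in outputs:
        for det in out:
            score = float(det[5:].max())
            if score > conf_thresh:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return len(keep)
```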
22

Apud Baca, Javier Gibran, Thomas Jantos, Mario Theuermann, Mohamed Amin Hamdad, Jan Steinbrener, Stephan Weiss, Alexander Almer, and Roland Perko. "Automated Data Annotation for 6-DoF AI-Based Navigation Algorithm Development." Journal of Imaging 7, no. 11 (November 10, 2021): 236. http://dx.doi.org/10.3390/jimaging7110236.

Abstract:
Accurately estimating the six degree of freedom (6-DoF) pose of objects in images is essential for a variety of applications such as robotics, autonomous driving, and autonomous, AI- and vision-based navigation for unmanned aircraft systems (UAS). Developing such algorithms requires large datasets; however, generating them is tedious, as it requires annotating the 6-DoF relative pose of each object of interest present in the image w.r.t. the camera. Therefore, this work presents a novel approach that automates the data acquisition and annotation process and thus reduces the annotation effort to the duration of the recording. To maximize the quality of the resulting annotations, we employ an optimization-based approach for determining the extrinsic calibration parameters of the camera. Our approach can handle multiple objects in the scene, automatically providing ground-truth labeling for each object and taking into account occlusion effects between different objects. Moreover, our approach can not only be used to generate data for 6-DoF pose estimation and corresponding 3D models, but can also be extended to automatic dataset generation for object detection, instance segmentation, or volume estimation for any kind of object.
23

Shabelnik, Tetyana, Serhii Krivenko, and Olena Koneva. "Automatic Pilot System for Unmanned Aircraft in the Absence of Radio Communication." Cybersecurity: Education, Science, Technique 1, no. 9 (2020): 93–103. http://dx.doi.org/10.28925/2663-4023.2020.9.93103.

Abstract:
One of the most pressing problems in piloting unmanned aerial vehicles (UAVs), flight in the absence of radio communication, is considered in this article. The aim is to develop an algorithm and method for the automatic piloting of UAVs when the radio control signal is lost, using methods of technical vision. The most effective methods of tracking, identifying, and detecting landmarks are based on comparing reference information (a database of known navigation objects) with the observed scene in real time. A working system for the automatic piloting of UAVs under loss of the radio control signal or GPS navigation was developed. The hardware and software of the UAV provide fully automatic control. Programming the system consists of two stages: planning the flight task and calculating the trajectory of the UAV in flight. The flight task is planned by setting topographic landmarks and flight parameters relative to them. At this stage, criteria for generalizing the various components of the landscape are formed and divided into gradations. This work is combined with the recognition of points with altitude marks and with fixing the heights of the horizontal surfaces available in the area. All horizontal surfaces are tied by the shortest survey strokes to at least three points with elevations. The process of topography-based object selection is directly related to its segmentation, the results of which significantly affect the further image analysis and UAV control. The starting point of the route is calibrated during the launch of the UAV. The control system automatically monitors the location of the UAV along the whole trajectory on a topographic basis relative to the prespecified landmarks. Structured shots of the terrain and topographic bases are compared during the flight. The algorithm is based on comparing the geometric parameters of landmarks, namely the geometric center O(x, y) and the area S. The control signal along the three axes OX, OY, and OZ is determined by the method of least squares, depending on the values of the calculated coefficients of the original equations.
24

Bemposta Rosende, Sergio, Javier Sánchez-Soriano, Carlos Quiterio Gómez Muñoz, and Javier Fernández Andrés. "Remote Management Architecture of UAV Fleets for Maintenance, Surveillance, and Security Tasks in Solar Power Plants." Energies 13, no. 21 (November 1, 2020): 5712. http://dx.doi.org/10.3390/en13215712.

Abstract:
This article presents a remote management architecture for an unmanned aerial vehicle (UAV) fleet to aid in the management of solar power plants and in object tracking. The proposed system is a competitive advantage for solar energy production plants, due to the reduction in costs for maintenance, surveillance, and security tasks, especially in large solar farms. This new approach consists of creating a hardware and software architecture that allows different tasks to be performed automatically, as well as remotely, using fleets of UAVs. The entire system, composed of the aircraft, the servers, communication networks, and the processing center, as well as the interfaces for accessing the services via the web, has been designed for this specific purpose. Image processing and automated remote control of the UAV allow autonomous missions to be generated for the inspection of defects in solar panels, saving costs compared to traditional manual inspection. Another application of this architecture, related to security, is the detection and tracking of pedestrians and vehicles, both for road safety and for the surveillance and security of solar plants. The novelty of this system with respect to current systems lies in the definition and detailing of all the software and hardware elements for the inspection of solar panels, surveillance, and people counting, as well as traffic management tasks. The modular system presented allows the exchange of different task-specific vision modules. Finally, unlike other systems, calibrated fixed cameras are used in addition to the cameras embedded in the drones of the fleet, complementing the system with deep-learning-based vision algorithms for identification, surveillance, and inspection.
25

Gabara, Grzegorz, and Piotr Sawicki. "Multi-Variant Accuracy Evaluation of UAV Imaging Surveys: A Case Study on Investment Area." Sensors 19, no. 23 (November 28, 2019): 5229. http://dx.doi.org/10.3390/s19235229.

Abstract:
The main focus of the presented study is a multi-variant accuracy assessment of photogrammetric 2D and 3D data collection, whose accuracy meets the appropriate technical requirements, based on a block of 858 digital images (4.6 cm ground sample distance) acquired by a Trimble® UX5 unmanned aircraft system equipped with a Sony NEX-5T compact system camera. All 1418 well-defined ground control and check points were measured a posteriori with Global Navigation Satellite Systems (GNSS) using the real-time network method. High accuracy of the photogrammetric products was obtained by computations performed according to the proposed methodology, which assumes multi-variant image processing and extended error analysis. The detection of blurred images was performed in preprocessing by applying the Laplacian operator and the Fourier transform, implemented in Python using the Open Source Computer Vision library. The data processing was performed in the Pix4Dmapper suite supported by additional software: in the bundle block adjustment (results verified using the RealityCapture and PhotoScan applications), on the digital surface model (CloudCompare), and on the georeferenced orthomosaic in GeoTIFF format (AutoCAD Civil 3D). The study proved the high accuracy and significant statistical reliability of unmanned aerial vehicle (UAV) imaging 2D and 3D surveys. The accuracy fulfills Polish and US technical requirements for planimetric and vertical accuracy (root mean square error less than or equal to 0.10 m and 0.05 m, respectively).
26

Dinh, Truong Duy, Rustam Pirmagomedov, Van Dai Pham, Aram A. Ahmed, Ruslan Kirichek, Ruslan Glushakov, and Andrei Vladyko. "Unmanned aerial system–assisted wilderness search and rescue mission." International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985071. http://dx.doi.org/10.1177/1550147719850719.

Abstract:
The success of wilderness search and rescue missions is highly dependent on the time required to find the lost person. The use of unmanned aerial systems may enhance search and rescue missions by supplying aerial support for the search process. There are unmanned aerial system–based solutions capable of detecting the lost person using computer vision, infrared sensors, and detection of a mobile phone signal. The most pressing issue is reducing the cost of a search and rescue mission. Thus, to improve the efficiency of resource utilization in the wilderness search scenario, we consider the use of an unmanned aerial system for both mobile phone detection and enabling Wi-Fi communication for the ground portion of the search and rescue team. Such an approach does not require specific additional tools (e.g. an access point or specific user equipment) for communication, which reduces the cost and improves the scalability and coordination of the search and rescue mission. As a result, the article provides methods for searching the wilderness for a person using beacon signals from a mobile phone in two situations: when the distance to the source of the emergency signals is unknown and when the distance is known. In addition, the voice transmission delay and the number of unmanned aircraft needed to guarantee the quality of a call are determined.
27

Petrov, Yu V., V. N. Garmash, and D. M. Korobochkin. "The Detection of Precipitation and the Determination of Their Intensity Level Based on Images Generated by the Visible Range Camera." Issues of Radio Electronics, no. 7 (July 20, 2018): 131–38. http://dx.doi.org/10.21778/2218-5453-2018-7-131-138.

Abstract:
This article provides a method for the automatic detection of precipitation and the estimation of its intensity in images generated by a visible-range camera. With the ability to determine rainfall intensity from a single image, and by reducing the number of convolution and multiplication operations characteristic of known similar methods, the proposed method has a lower computational complexity and can be used on mobile platforms in real-time mode. The result is achieved by using such digital image processing methods as the histogram of oriented gradients (HoG), digital filtering, discrete convolution, and wavelet analysis. The main target of the proposed method is implementation in enhanced and synthetic vision systems as part of the onboard radio-electronic complexes of aircraft, with the purpose of increasing the crew's situational awareness at various stages of flight.
28

Himabindu, Dakshayani D., and Praveen S. Kumar. "A Streamlined Attention Mechanism for Image Classification and Fine-Grained Visual Recognition." MENDEL 27, no. 2 (December 21, 2021): 59–67. http://dx.doi.org/10.13164/mendel.2021.2.059.

Abstract:
Attention mechanisms in deep learning have recently played a vital role in producing better results in computer vision tasks. Multiple kinds of attention-based works exist, covering image classification, fine-grained visual recognition, image captioning, video captioning, and object detection and recognition tasks. Global and local attention are the two attention-based mechanisms that help interpret the attentive part. Within this framework there exist channel and spatial attention, where channel attention selects the most attentive channel among the produced block of channels and spatial attention selects which region of the space needs to be focused on. We propose a streamlined attention block module that enhances feature-based learning with a small number of additional layers, i.e., a GAP layer followed by a linear layer with an incorporation of second-order pooling (GSoP) after every layer in the utilized encoder. In our experiments this mechanism produced better long-range dependencies. We experimented on the CIFAR-10, CIFAR-100, and FGVC-Aircraft datasets, considering fine-grained visual recognition, and achieved a state-of-the-art result for FGVC-Aircraft with an accuracy of 97%.
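Read literally, the described block is close to squeeze-and-excitation-style channel attention; a minimal PyTorch sketch under that reading is below. The GSoP component and the exact placement in the encoder are omitted here:

```python
import torch
import torch.nn as nn

class StreamlinedAttention(nn.Module):
    """Channel gating from a GAP layer followed by a linear layer."""
    def __init__(self, channels: int):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = torch.sigmoid(self.fc(self.gap(x).view(b, c)))
        return x * weights.view(b, c, 1, 1)
```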
29

Tao, Jiagui, Qingmiao Chen, Jinxia Xu, Heng Zhao, and Siqi Song. "Utilization of Both Machine Vision and Robotics Technologies in Assisting Quality Inspection and Testing." Mathematical Problems in Engineering 2022 (March 16, 2022): 1–9. http://dx.doi.org/10.1155/2022/7547801.

Abstract:
Inspecting the smoothness and durability of the exterior and interior surfaces of manufactured products such as large passenger aircraft requires precision inspection. Manual techniques are mainly adopted in such inspections; however, they are costly, have low efficiency, and are prone to missed and false detections. To cope with these issues, effective tools for the intelligent inspection of impalpable damage are needed. Although many inspection tasks have been investigated, very few public datasets are available for the intelligent surface inspection of precision instruments, and such instruments can exhibit a wide variety of damage. In this study, a YOLO v3 based approach operating on acquired and processed surface images is proposed. An image dataset of impalpable damage was first established. The YOLO v3 detection network is used to roughly locate the damage and identify the damage type. Afterward, a specially designed level-set algorithm is employed to obtain more accurate damage locations in the image block, utilizing the characteristics of the different damage types. Finally, a quantitative analysis is performed on the refined detection results. The proposed deep architecture can conduct damage detection intelligently, exhibits a strong tolerance for unknown damage types, and offers excellent flexibility and adaptability. Extensive experimental results demonstrate that the proposed method alleviates the shortcomings of traditional inspection methods, provides technical guidance for intelligent orbital inspection by robots, and, with its precise and noninvasive inspection technique, remarkably reduces the inspection time for impalpable surface damage.
30

Vrochidou, Eleni, George K. Sidiropoulos, Athanasios G. Ouzounis, Anastasia Lampoglou, Ioannis Tsimperidis, George A. Papakostas, Ilias T. Sarafis, Vassilis Kalpakis, and Andreas Stamkos. "Towards Robotic Marble Resin Application: Crack Detection on Marble Using Deep Learning." Electronics 11, no. 20 (October 12, 2022): 3289. http://dx.doi.org/10.3390/electronics11203289.

Abstract:
Cracks can occur on different surfaces such as buildings, roads, aircraft, etc. The manual inspection of cracks is time-consuming and prone to human error. Machine vision has been used for decades to detect defects in materials on production lines. However, the detection or segmentation of cracks on a randomly textured surface, such as marble, has not been sufficiently investigated. This work provides an up-to-date, systematic, and exhaustive study of marble crack segmentation in color images based on deep learning (DL) techniques. The authors conducted a performance evaluation of 112 DL segmentation models on red–green–blue (RGB) marble slab images using five-fold cross-validation, providing consistent evaluation metrics in terms of Intersection over Union (IoU), precision, recall, and F1 score to identify the segmentation challenges related to the physiology of marble cracks. Comparative results reveal the FPN model as the most efficient architecture, scoring 71.35% mean IoU, and SE-ResNet as the most effective feature extraction network family. The results indicate the importance of selecting an appropriate loss function and backbone network, underline the challenges of the marble crack segmentation problem, and pose an important step towards the robotic automation of crack segmentation and simultaneous resin application to heal cracks in marble-processing plants.
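For reference, the headline IoU metric is straightforward to compute for binary crack masks; a minimal NumPy version (thresholding of model outputs is assumed to happen upstream):

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, target).sum() / union
```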
31

Krisanski, Sean, Mohammad Sadegh Taskhiri, James Montgomery, and Paul Turner. "Design and Testing of a Novel Unoccupied Aircraft System for the Collection of Forest Canopy Samples." Forests 13, no. 2 (January 20, 2022): 153. http://dx.doi.org/10.3390/f13020153.

Abstract:
Unoccupied Aircraft Systems (UAS) are beginning to replace conventional forest plot mensuration through their use as low-cost and powerful remote sensing tools for monitoring growth, estimating biomass, evaluating carbon stocks, and detecting weeds; however, physical samples are still mostly collected through time-consuming, expensive, and potentially dangerous conventional techniques. Such techniques include the use of arborists to climb the trees to retrieve samples, shooting branches down with firearms from the ground, canopy cranes, or the use of pole-mounted saws to access lower branches. UAS hold much potential to improve the safety and efficiency of acquiring canopy samples and to reduce their cost. In this work, we describe and demonstrate four iterations of 3D-printed canopy sampling UAS. This includes detailed explanations of the designs and how each iteration informed the design decisions in the subsequent one. The fourth iteration of the aircraft was tested in the collection of 30 canopy samples from three tree species: Eucalyptus pulchella, Eucalyptus globulus, and Acacia dealbata. Collection times ranged from 1 min and 23 s up to 3 min and 41 s for more distant and challenging samples. A vision for the next design iteration is also provided. Future work may explore the integration of advanced remote sensing techniques with UAS-based canopy sampling to progress towards a fully automated and holistic forest information capture system.
32

Vicens-Miquel, Marina, F. Antonio Medrano, Philippe E. Tissot, Hamid Kamangir, Michael J. Starek, and Katie Colburn. "A Deep Learning Based Method to Delineate the Wet/Dry Shoreline and Compute Its Elevation Using High-Resolution UAS Imagery." Remote Sensing 14, no. 23 (November 26, 2022): 5990. http://dx.doi.org/10.3390/rs14235990.

Abstract:
Automatically detecting the wet/dry shoreline from remote sensing imagery has many benefits for beach management in coastal areas by enabling managers to take measures to protect wildlife during high water events. This paper proposes the use of a modified HED (Holistically-Nested Edge Detection) architecture to create a model for automatic feature identification of the wet/dry shoreline and to compute its elevation from the associated DSM (Digital Surface Model). The model is generalizable to several beaches in Texas and Florida. The data from the multiple beaches was collected using UAS (Uncrewed Aircraft Systems). UAS allow for the collection of high-resolution imagery and the creation of the DSMs that are essential for computing the elevations of the wet/dry shorelines. Another advantage of using UAS is the flexibility to choose locations and metocean conditions, allowing to collect a varied dataset necessary to calibrate a general model. To evaluate the performance and the generalization of the AI model, we trained the model on data from eight flights over four locations, tested it on the data from a ninth flight, and repeated it for all possible combinations. The AP and F1-Scores obtained show the success of the model’s prediction for the majority of cases, but the limitations of a pure computer vision assessment are discussed in the context of this coastal application. The method was also assessed more directly, where the average elevations of the labeled and AI predicted wet/dry shorelines were compared. The absolute differences between the two elevations were, on average, 2.1 cm, while the absolute difference of the elevations’ standard deviations for each wet/dry shoreline was 2.2 cm. The proposed method results in a generalizable model able to delineate the wet/dry shoreline in beach imagery for multiple flights at several locations in Texas and Florida and for a range of metocean conditions.
33

Chowhan, Kishan S., Hemendra Arya, Vijay V. Patel, and Girish S. Deodhare. "Real-Time Vision-Based Aircraft Vertical Tail Damage Detection and Parameter Estimation." IETE Journal of Research, February 23, 2022, 1–12. http://dx.doi.org/10.1080/03772063.2022.2038703.

34

Farhadmanesh, Mohammad, Nikola Marković, and Abbas Rashidi. "Automated Video-Based Air Traffic Surveillance System for Counting General Aviation Aircraft Operations at Non-Towered Airports." Transportation Research Record: Journal of the Transportation Research Board, August 21, 2022, 036119812211150. http://dx.doi.org/10.1177/03611981221115087.

Abstract:
The vast majority of U.S. airports are not equipped with control towers, which limits their ability to keep records of flight operations. Attempts have been made to use sensor-based technologies to count aircraft operations at non-towered airports; however, they exhibit limited accuracy. To this end, we developed an automated video-based surveillance system capable of detecting general aviation aircraft departure and landing operations, which make up the vast majority of operations at non-towered airports. The proposed computer vision method comprises three modules: aircraft detection, aircraft tracking, and operations count and classification. We explored different camera layouts and state-of-the-art machine learning and deep learning methods to determine the best settings for extracting operation trajectory features for operations count and classification. The proposed method was tested at five non-towered airports. Integrating deep-neural-network-based aircraft detectors and image-correlation-based aircraft trackers achieved an accuracy of about 95%, while ensuring processing times suitable for real-time implementation.
35

Zhao, Qijie, Yaohui Kong, Shaojie Sheng, and Junjun Zhu. "Redundant Objects Detection Method for Civil Aircraft Assembly Based on Machine Vision and Smart Glasses." Measurement Science and Technology, June 28, 2022. http://dx.doi.org/10.1088/1361-6501/ac7cbd.

Abstract:
Aiming at the problems of slow detection speed and low accuracy for redundant objects in the assembly process of civil aircraft, a redundant object detection method based on machine vision and augmented reality smart glasses is proposed. The method uses smart glasses as the hardware device and takes the images collected by their camera as the input to the detection system, and it proposes an FPN-CenterNet machine vision model based on the CenterNet detection head to detect redundant objects. A multi-scale feature fusion method is used to address the low detection accuracy for small-scale redundant targets, and a loss function with dynamic weights is designed to address the unbalanced proportion of large and small targets in the training samples. The proposed network model is validated on the PASCAL VOC public dataset and on a self-built redundant object dataset. The method can detect a large number of redundant objects within the visible range of the smart glasses within 200 ms. The research results provide a new reference for the quality process management of civil aircraft assembly.
36

Lee, Bochan, Vishnu Saj, Dileep Kalathil, and Moble Benedict. "Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs." Journal of the American Helicopter Society, 2022. http://dx.doi.org/10.4050/jahs.68.022010.

Abstract:
The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships without utilizing GPS signals. The central idea involves automating the Navy helicopter ship landing procedure where the pilot utilizes the ship as the visual reference for long-range tracking; however, it refers to a standardized visual cue installed on most Navy ships called the “horizon bar” for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning based object detection for long-range ship tracking and classical computer vision for the estimation of aircraft relative position and orientation utilizing the horizon bar during the final approach and landing phases. The nonlinear controller operates based on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck, which imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy.
APA, Harvard, Vancouver, ISO, and other styles
37

Barros, Francisco, Susana Aguiar, Pedro J. Sousa, António Cachaço, Nuno V. Ramos, Paulo Tavares, P. M. G. Moreira, Luís Oliveira Santos, Min Xu, and Elsa Franco. "Detection and measurement of beam deflection in the Madeira Airport runway extension using digital image correlation." International Journal of Structural Integrity, November 17, 2022. http://dx.doi.org/10.1108/ijsi-03-2022-0049.

Full text
Abstract:
Purpose: Part of the runway at Madeira Airport is a platform above the sea at a height of 60 m, supported by a series of frames. When aircraft land on this section, a load is exerted on the structure, resulting in bending of the beams which constitute the frames. A vision-based monitoring system was devised and implemented to measure the deflection of the runway's beams when a landing occurs. Design/methodology/approach: An area on the midspan of two beams, located where aircraft are most likely to land, was prepared with a speckle pattern, and a camera was mounted above a column on each of the adjacent frames, enabling the computation of displacements using digital image correlation (DIC). The camera continuously acquires images of the monitored area and compares them to a reference using DIC. If a displacement is detected, a number of frames before and after this event are saved for further DIC processing. Findings: The installed systems successfully detected several events corresponding to landings and, for each of those events, measured the deflection of the beams over time and computed displacement fields for critical images; strain values obtained up to this point were too small to measure using the current system. Originality/value: This work provides novel insights into the behaviour of a unique structure and constitutes the first use of a vision system in its structural monitoring operations. It is also a valuable development in the implementation of automated DIC monitoring systems in locations of difficult access.
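A minimal sketch of the event-trigger logic described above, using OpenCV phase correlation as a lightweight stand-in for full DIC; the 0.05 px threshold and the rigid-shift simplification are assumptions.

```python
# Hedged sketch: flag a landing event when the monitored speckle region
# moves relative to a reference image. Phase correlation stands in for
# full DIC here; the threshold is illustrative.
import cv2
import numpy as np

def detect_deflection(reference: np.ndarray, frame: np.ndarray,
                      threshold_px: float = 0.05) -> bool:
    ref = np.float32(reference)
    cur = np.float32(frame)
    # Sub-pixel rigid shift between the speckle-patterned regions.
    (dx, dy), _ = cv2.phaseCorrelate(ref, cur)
    return bool(np.hypot(dx, dy) > threshold_px)

ref = np.random.rand(128, 128).astype(np.float32)
moved = np.roll(ref, 1, axis=0)       # synthetic 1 px vertical shift
print(detect_deflection(ref, moved))  # True
```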
APA, Harvard, Vancouver, ISO, and other styles
38

Melching, David, Tobias Strohmann, Guillermo Requena, and Eric Breitbarth. "Explainable machine learning for precise fatigue crack tip detection." Scientific Reports 12, no. 1 (June 9, 2022). http://dx.doi.org/10.1038/s41598-022-13275-1.

Full text
Abstract:
Data-driven models based on deep learning have led to tremendous breakthroughs in classical computer vision tasks and have recently made their way into the natural sciences. However, the absence of domain knowledge in their inherent design significantly hinders the understanding and acceptance of these models. Nevertheless, explainability is crucial to justify the use of deep learning tools in safety-relevant applications such as aircraft component design, service and inspection. In this work, we train convolutional neural networks for crack tip detection in fatigue crack growth experiments using full-field displacement data obtained by digital image correlation. For this, we introduce the novel architecture ParallelNets—a network which combines segmentation and regression of the crack tip coordinates—and compare it with a classical U-Net-based architecture. Aiming for explainability, we use the Grad-CAM interpretability method to visualize the neural attention of several models. Attention heatmaps show that ParallelNets is able to focus on physically relevant areas like the crack tip field, which explains its superior performance in terms of accuracy, robustness, and stability.
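Grad-CAM itself is a standard, published method; a compact PyTorch version is sketched below, with the model, target layer, and scoring function left as placeholders rather than the authors' ParallelNets code.

```python
# Hedged sketch of Grad-CAM applied to an arbitrary convolutional layer.
# `model`, `layer`, and `score_fn` are placeholders, not the paper's code.
import torch
import torch.nn.functional as F

def grad_cam(model, layer, x, score_fn):
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    score = score_fn(model(x))        # scalar of interest, e.g. a crack-tip logit
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["a"].mean(dim=(2, 3), keepdim=True)   # channel importance weights
    cam = F.relu((w * feats["a"]).sum(dim=1))       # weighted activation map
    return cam / cam.max().clamp(min=1e-8)          # normalised heatmap (N, H, W)
```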
APA, Harvard, Vancouver, ISO, and other styles
39

Truong, Long Ngo Hoang, Edward Clay, Omar E. Mora, Wen Cheng, Mankirat Singh, and Xudong Jia. "Rotated Mask Region-Based Convolutional Neural Network Detection for Parking Space Management System." Transportation Research Record: Journal of the Transportation Research Board, July 26, 2022, 036119812211050. http://dx.doi.org/10.1177/03611981221105066.

Full text
Abstract:
Parking space management systems help organize and optimize available parking spaces for consumers, making the process of finding and using parking spaces more efficient. Current parking space management systems include manual recognition, the employment of magnetic and ultrasonic sensors, and, recently, computer vision (CV). One relatively new region-based convolutional neural network (R-CNN) model, Mask R-CNN, has shown promise in its ability to detect objects and has demonstrated superior performance over many other popular CV methods. Building on Mask R-CNN, an updated version, Rotated Mask R-CNN, which can generate bounding boxes whose axes are rotated relative to the image axes, was proposed to address the limitation of Mask R-CNN. Despite these documented theoretical benefits, applications of the rotated version remain rare because of its recent introduction. To this end, the study aims to detect vehicle instances in one parking lot using various Rotated Mask R-CNN models based on images collected by an unmanned aircraft system. Both average precision and average recall were utilized to assess the performance of the alternative models with different backbone and head networks. The results reveal the high accuracy level associated with Rotated Mask R-CNN in real-time detection of vehicles. In addition, the results indicate that the inference speed and total loss are highly correlated with head networks and training schedules.
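To illustrate the rotated-box representation the paper builds on, the sketch below recovers a minimum-area rotated rectangle from a binary instance mask with OpenCV; this shows the output geometry only, not the Rotated Mask R-CNN model.

```python
# Hedged sketch: a rotated bounding box of the kind Rotated Mask R-CNN
# predicts, recovered here from a binary mask for illustration.
import cv2
import numpy as np

def mask_to_rotated_box(mask: np.ndarray):
    """mask: uint8 binary instance mask. Returns ((cx, cy), (w, h), angle)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.minAreaRect(largest)     # minimum-area rotated rectangle

mask = np.zeros((100, 100), np.uint8)
poly = np.array([[20, 40], [70, 20], [80, 45], [30, 65]], dtype=np.int32)
cv2.fillPoly(mask, [poly], 255)         # synthetic tilted vehicle footprint
print(mask_to_rotated_box(mask))
```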
APA, Harvard, Vancouver, ISO, and other styles
40

Purcell, Cormac R., Andrew J. Walsh, Andrew P. Colefax, and Paul Butcher. "Assessing the ability of deep learning techniques to perform real-time identification of shark species in live streaming video from drones." Frontiers in Marine Science 9 (October 20, 2022). http://dx.doi.org/10.3389/fmars.2022.981897.

Full text
Abstract:
Over the last five years, remotely piloted drones have become the tool of choice to spot potentially dangerous sharks in New South Wales, Australia. They have proven to be a more effective, accessible and cheaper solution compared to crewed aircraft. However, the ability to reliably detect and identify marine fauna is closely tied to pilot skill, experience and level of fatigue. Modern computer vision technology offers the possibility of improving detection reliability and even automating the surveillance process in the future. In this work, we investigate the ability of commodity deep learning algorithms to detect marine objects in video footage from drones, with a focus on distinguishing between shark species. This study was enabled by the large archive of video footage gathered during the NSW Department of Primary Industries Drone Trials since 2016. We used this data to train two neural networks, based on the ResNet-50 and MobileNet V1 architectures, to detect and identify ten classes of marine object in 1080p resolution video footage. Both networks are capable of reliably detecting dangerous sharks: 80% accuracy for RetinaNet-50 and 78% for MobileNet V1 when tested on a challenging external dataset, which compares well to human observers. The object detection models correctly detect and localise most objects, produce few false-positive detections and can successfully distinguish between species of marine fauna in good conditions. We find that shallower network architectures, like MobileNet V1, tend to perform slightly worse on smaller objects, so care is needed when selecting a network to match deployment needs. We show that inherent biases in the training set have the largest effect on reliability. Some of these biases can be mitigated by pre-processing the data prior to training; however, this requires a large store of high-resolution images that supports augmentation. A key finding is that models need to be carefully tuned for new locations and water conditions. Finally, we built an Android mobile application to run inference on real-time streaming video and demonstrated a working prototype during field trials run in partnership with Surf Life Saving NSW.
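A minimal sketch of the kind of real-time loop the field trials imply: pull frames from a live stream and run a detector on every n-th frame. The stream URL and the `detect` callable are placeholders for whichever trained model is deployed.

```python
# Hedged sketch of a real-time streaming inference loop; the detector is
# a placeholder callable returning (label, confidence, (x, y, w, h)) tuples.
import cv2

def run_stream(url: str, detect, stride: int = 3):
    cap = cv2.VideoCapture(url)
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:            # skip frames to hold the real-time rate
            for label, conf, (x, y, w, h) in detect(frame):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 4),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
        cv2.imshow("surveillance", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        idx += 1
    cap.release()
```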
APA, Harvard, Vancouver, ISO, and other styles
41

"Development of algorithmic support for detecting an aircraft carrier using machine vision technology for automatic landing of UAV." Automation. Modern Techologies, 2022, 539–50. http://dx.doi.org/10.36652/0869-4931-2022-76-12-545-550.

Full text
Abstract:
The problem of detecting an aircraft carrier in order to measure the position and orientation of an unmanned aerial vehicle (UAV) is considered. An algorithm for selecting regions of interest (ROI) using motion detection and saliency analysis via the spectral residual method is proposed. Based on the bag-of-words (BoW) model, a classification algorithm is constructed that combines SIFT feature extraction, K-means clustering, and an SVM classifier. Test results confirmed the high performance and adequate accuracy of aircraft carrier detection using the proposed set of algorithms. Keywords: aircraft carrier landing, UAV, object detection, motion detection, spectral residual method, SIFT feature, K-means clustering, support vector machine.
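The spectral residual method cited in the abstract is a published saliency algorithm (Hou & Zhang, 2007), so it can be sketched concretely; the resize and filter sizes below are commonly used defaults, not necessarily the authors' settings.

```python
# Hedged sketch of spectral residual saliency (Hou & Zhang, 2007), as used
# above for ROI proposal; parameter choices are the usual defaults.
import cv2
import numpy as np

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    img = cv2.resize(gray, (64, 64)).astype(np.float32)
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude spectrum minus its local average.
    residual = log_amp - cv2.boxFilter(log_amp, -1, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)  # saliency in [0, 1]
```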
APA, Harvard, Vancouver, ISO, and other styles
42

Ahmed, M., Ali Maher, and X. Bai. "Aircraft tracking in aerial videos based on fused RetinaNet and low‐score detection classification." IET Image Processing, October 18, 2022. http://dx.doi.org/10.1049/ipr2.12665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Pakfiliz, Ahmet Güngör. "A New Method for Surface-to-Air Video Detection and Tracking of Airborne Vehicles." International Journal of Pattern Recognition and Artificial Intelligence 36, no. 01 (January 2022). http://dx.doi.org/10.1142/s0218001422550011.

Full text
Abstract:
The detection of airborne targets may seem simple because aircraft, helicopters, UAVs, and drones stand out clearly against a clear-sky background. When changes in the background are considered, however, brightness variation of the sky complicates the process, and changes in the shapes and types of clouds add a further challenge. The tracking process depends directly on the detection process and the type of data stream. The practical systems used for video detection and tracking of airborne targets are manual, and manual structures have drawbacks compared to automatic ones. For video surveillance, guidance, regional security, and defense applications in dense environments, an automatic detection and tracking process may be a necessity rather than a preference. In this study, an automatic detection and tracking algorithm for video streams of airborne targets is proposed. A land-based moving camera captures the video data, so not only the flying objects but also the camera may be in motion. Although detecting and tracking moving objects with moving sensors is a relatively arduous task, this is the prevalent case in real-life scenarios: video detection and tracking systems have one or more moving video sensors while one or more air vehicles operate in the area. The proposed algorithm includes an image processing stage for detection and a tracking stage for track initiation and continuation. An assessment study conducted on actual video data found that the proposed method yields successful results for detection, track formation, and continuation.
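A minimal sketch of a detection stage for this scenario, assuming a static camera for simplicity (the moving-camera case described above would additionally need frame alignment, e.g. homography-based compensation): frame differencing exposes moving targets against the sky. Thresholds are illustrative.

```python
# Hedged sketch: frame-differencing detection of airborne targets against
# a sky background, as a static-camera simplification of the paper's
# image processing stage. Thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_moving_targets(prev_gray, curr_gray, min_area: int = 25):
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes of sufficiently large moving blobs.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```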
APA, Harvard, Vancouver, ISO, and other styles