
Journal articles on the topic 'You only look once v8'


Consult the top 50 journal articles for your research on the topic 'You only look once v8.'


1

Huangfu, Zhongmin, and Shuqing Li. "Lightweight You Only Look Once v8: An Upgraded You Only Look Once v8 Algorithm for Small Object Identification in Unmanned Aerial Vehicle Images." Applied Sciences 13, no. 22 (2023): 12369. http://dx.doi.org/10.3390/app132212369.

Abstract:
To address the high miss rate, high false detection rate, low detection success rate, and large model size of traditional object detection algorithms on small targets in Unmanned Aerial Vehicle (UAV) aerial images, a lightweight You Only Look Once (YOLO) v8 model, Lightweight (LW)-YOLO v8, is proposed. By adding the Squeeze-and-Excitation (SE) channel attention module, the method adaptively improves the model's ability to extract features from small targets. At the same time, lightweight convolution is introduced into the Conv module, replacing ordinary convolution with the GSConv module, which effectively reduces the model's computational load. On the basis of the GSConv module, a single-aggregation module, VoV-GSCSPC, is designed to optimize the model structure and achieve better computational cost-effectiveness. Experimental results show that the LW-YOLO v8 model's mAP@0.5 on the VisDrone2019 dataset improves on the YOLO v8n model by 3.8 percentage points, while computation is reduced to 7.2 GFLOPs. The proposed LW-YOLO v8 model can effectively accomplish small-target detection in UAV aerial images at a lower cost.
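The Squeeze-and-Excitation (SE) channel attention mentioned in this abstract can be sketched in a few lines. The following is a minimal pure-Python illustration, not the authors' implementation: the feature-map layout ([channels][height][width]) and the two fully connected weight matrices are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation sketch: reweight channels by learned attention.

    feature_map: list of C channels, each an HxW list of lists.
    w1: reduction FC weights, shape [C//r][C]; w2: expansion FC weights, shape [C][C//r].
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_map]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields a weight in (0, 1) per channel.
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    weights = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Scale: multiply each channel by its attention weight.
    return [[[v * weights[c] for v in row] for row in ch]
            for c, ch in enumerate(feature_map)]
```

Channels whose squeezed statistics excite the gate toward 1 pass through nearly unchanged, while others are suppressed, which is how SE helps a detector emphasize feature channels that matter for small targets.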
2

Rizqi Basuki, Nurfadjri Akbar, and Hustinawaty Hustinawaty. "You only look once v8 for fish species identification." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 3 (2024): 3314–21. http://dx.doi.org/10.11591/ijai.v13.i3.pp3314-3321.

Abstract:
This research aims to test the performance of you only look once (YOLOv8) in identifying fish species in Indonesian waters. Fish image data is obtained from various sources to conduct testing. The results show that YOLOv8 is able to identify fish species with a mAP accuracy rate of 97%. These results reveal the great potential of deep learning technology in supporting the preservation of marine biodiversity as well as the development of various applications, such as fisheries monitoring, conservation, and marine-based tourism development in Indonesia. With its efficient object detection and classification capabilities, YOLOv8 can simplify and accelerate the process of identifying fish species, even on a large scale. Thus, this technology is a highly effective solution to overcome the challenges of manual fish species identification, which requires a lot of time and effort. The results of this study provide valuable insights into the potential utilization of Indonesia's natural resources in the context of scientific development, the tourism industry, and the fisheries sector, which is vital to the country's economy.
4

Arifadilah, Daffa, Asriyanik, and Agung Pambudi. "Sunda Script Detection Using You Only Look Once Algorithm." Journal of Artificial Intelligence and Engineering Applications (JAIEA) 3, no. 2 (2024): 606–13. http://dx.doi.org/10.59934/jaiea.v3i2.443.

Abstract:
The Sundanese script is a writing system used for Sundanese, one of the regional languages of West Java, Indonesia. This study investigates the use of the YOLO v8 algorithm for real-time video detection of Sundanese script. Various YOLO v8 variants, including YOLO v8n, v8s, v8m, v8l, and v8x, were tested to determine the most effective model. After a comprehensive evaluation of mean Average Precision (mAP), F1-Confidence, and precision, the study selected the YOLO v8s model as the primary detection method. YOLO v8s demonstrated superior performance with the highest mAP of 98.835%, an F1-Confidence of 98%, and a precision of 76.2%. This choice balances high accuracy against computational efficiency. The results indicate significant potential for object recognition technology in the learning and preservation of Sundanese script.
5

Priandini, Jesita Reinandra. "Pengenalan Rambu Lalu Lintas Menggunakan Model You Only Look Once (YOLO) V8." Jurnal Rekayasa Sistem Informasi dan Teknologi 2, no. 2 (2024): 801–9. https://doi.org/10.70248/jrsit.v2i2.1607.

Abstract:
An autonomous car is a vehicle that can drive itself without human assistance. However, such cars have difficulty detecting traffic signs. Traffic sign recognition is designed to make autonomous cars safer by allowing them to recognize the signs they pass. This method uses the YOLOv8 model, a development of the Convolutional Neural Network approach, to detect and classify traffic signs; the model was chosen for its efficiency and accuracy. A Roboflow dataset containing 2,390 images of 17 types of Indonesian traffic signs was used in this study. With an accuracy of 97.90%, a precision of 0.978, a recall of 0.989, a mAP50 of 0.987, and a mAP50-95 of 0.825, this study shows that the model performs very well. These values indicate that the model can accurately locate and classify traffic signs.
6

Hayati, Nurhaliza Juliyani, Dayan Singasatia, and Muhamad Rafi Muttaqin. "Object Tracking Menggunakan Algoritma You Only Look Once (YOLO)v8 untuk Menghitung Kendaraan." Komputa : Jurnal Ilmiah Komputer dan Informatika 12, no. 2 (2023): 91–99. http://dx.doi.org/10.34010/komputa.v12i2.10654.

Abstract:
Vehicles such as cars and motorbikes have been a means of transportation from ancient times to the present. Counting vehicle types and numbers is carried out to collect traffic data. Obtaining these parameters by manual counting is prone to error and consumes considerable time and energy. Object detection, a field of computer vision, is one application of Artificial Intelligence; in intelligent transportation systems, traffic data is the key to conducting research and designing a system. To address this problem, the researchers applied object tracking with the You Only Look Once (YOLO) v8 algorithm to detect vehicle types and count vehicles. The methodology follows the AI Project Cycle stages: problem scoping, data acquisition, data exploration, modeling, and confusion matrix evaluation. The confusion matrix evaluation yielded an accuracy of 89%, a precision of 89%, a recall of 90%, and an F1-Score (the weighted combination of precision and recall) of 89%. Thus, the You Only Look Once (YOLO) v8 algorithm is sufficiently accurate for object tracking to count vehicles.
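The confusion-matrix metrics this abstract reports (precision, recall, F1-Score) reduce to three short formulas over true-positive, false-positive, and false-negative counts. A minimal sketch, with hypothetical counts, not taken from the paper:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.

    tp: correct detections; fp: spurious detections; fn: missed objects.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0   # of all detections, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0       # of all objects, how many were found
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, `prf1(90, 10, 10)` gives precision 0.9, recall 0.9, and F1 0.9, close to the balance of scores reported above.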
7

Akyas Hifdzi Rahman, Rifqi, Asril Adi Sunarto, and Asriyanik Asriyanik. "PENERAPAN YOU ONLY LOOK ONCE (YOLO) V8 UNTUK DETEKSI TINGKAT KEMATANGAN BUAH MANGGIS." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 5 (2024): 10566–71. http://dx.doi.org/10.36040/jati.v8i5.10979.

Abstract:
Indonesia has great potential for tropical fruit production, including the mangosteen (Garcinia mangostana Linn), known as the "queen of fruits". However, mangosteen ripeness classification is still carried out manually, which is prone to human error. This study aims to develop a mangosteen ripeness detection model using the You Only Look Once (YOLO) algorithm to improve sorting accuracy and efficiency. Following the CRISP-DM approach, mangosteen image data were collected, labeled, and augmented. The results show that the YOLOv8s model with the SGD optimizer performed best, with a precision of 0.997, a recall of 1, and a mAP50-95 of 0.972. Implementing the model in a web-based system shows great potential for replacing the error-prone manual method. The model is expected to improve efficiency and accuracy in the agricultural industry, particularly for mangosteen sorting.
8

Afiansyah, Rifan, Prajoko Prajoko, and Asriyanik Asriyanik. "PEMODELAN DETEKSI BELA DIRI BERBASIS WEB DENGAN ALGORITMA YOU ONLY LOOK ONCE V8." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 5 (2024): 9970–77. http://dx.doi.org/10.36040/jati.v8i5.10879.

Abstract:
Martial arts serve not only as a method of self-defense but also offer positive benefits such as maintaining health, building discipline, and promoting cultural values. With growing interest in martial arts movement detection technology for training and education, previous research has used methods such as Convolutional Neural Networks (CNN) to detect silat movements with 77% accuracy, as well as Support Vector Machines and YOLOv3 to classify basic karate poses with high precision, recall, and F1 scores, though still with an error rate of 66.66%. However, those studies were generally limited to detecting a single martial art. This study therefore aims to develop a web-based martial arts movement detection system using YOLOv8, focusing on three martial arts: karate, taekwondo, and silat. YOLO was chosen for its ability to detect objects in real time, predicting bounding boxes and class probabilities directly from a full image in a single evaluation. The model is expected to recognize movements with at least 90% accuracy across 25 movement classes covering the three martial arts. It was trained on data processed and augmented in Roboflow, using the AdamW optimizer with a learning rate of 0.001. Testing over 50 epochs showed high accuracy, with near-perfect precision, recall, and F1 metrics. The model was then implemented in a simple website that lets users detect martial arts movements interactively, demonstrating strong potential for practical applications. This research is expected to advance movement detection technology in martial arts and contribute to improving the quality of martial arts training and education.
9

Bayu Pangestu, Andhika, Muhamad Rafi Muttaqin, and Muhamad Agus Sunandar. "SISTEM DETEKSI BAHASA ISYARAT INDONESIA (BISINDO) MENGGUNAKAN ALGORITMA YOU ONLY LOOK ONCE (YOLO)v8." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 5 (2024): 9891–97. http://dx.doi.org/10.36040/jati.v8i5.10833.

Abstract:
Indonesian Sign Language (BISINDO) is an important means of communication for deaf people in Indonesia, yet many hearing people do not understand it. To facilitate communication, this study designed a BISINDO detection system using the YOLOv8 algorithm. YOLOv8 was trained on a classified dataset of images and videos, and the system was implemented on the Streamlit platform for easy accessibility. The data were used to train and test the model under various lighting conditions and backgrounds. Evaluation results show a precision of 0.958 (95.8%), a recall of 0.974 (97.4%), and a mAP50 of 0.995 (99.5%); the mAP50-90 value was 0.884 (88.4%), with a processing time of one hour. Evaluation using a confusion matrix and mean Average Precision (mAP) shows that the model performs well in detecting objects. This implementation is effective in overcoming communication barriers between deaf people and the general public, supporting inclusive development in Indonesia.
10

Ekhsanto, Bagus Kurniawan, Bagus Adhi Kusuma, and Adam Prayogo Kuncoro. "IMPLEMENTATION OF YOU ONLY LOOK ONCE V8 ALGORITHM IN POTATO LEAF DISEASE DETECTION SYSTEM." Jurnal Teknik Informatika (Jutif) 5, no. 4 (2024): 125–32. https://doi.org/10.52436/1.jutif.2024.5.4.2104.

Abstract:
Agriculture is an important foundation of the national economy, as effective development in this sector supports overall economic stability. The potato is one of the world's staple foods after rice, wheat, and corn, a horticultural crop widely planted and developed to meet people's needs. The Bibit Sida Kangen farm in Kalibening, Banjarnegara, one of the farms that grows potatoes, faces problems with potato diseases that reduce crop productivity. The main purpose of this system is therefore to provide fast and accurate disease detection for that farm, helping farmers reduce losses caused by disease attacks on their plants. Using You Only Look Once v8 (YOLOv8) technology, the system can recognize and classify potato leaf disease types, including early_blight, late_blight, and healthy plants, with a high level of accuracy. Evaluation using precision and recall metrics shows a significant success rate, with precision for early_blight of 87%, healthy plants of 81%, and late_blight of 97%; recall for the three categories likewise reached 87%, 81%, and 97%, respectively. With an overall accuracy of 88%, these findings confirm that the developed detection system successfully identifies potato leaf diseases with high accuracy. This indicates the system's great potential for helping farmers manage the condition of their potato crops, which in turn can improve farmers' productivity and welfare.
11

Purnomo, Niko, Windu Gata, Muhammad Romadhona Kusuma, Riadi Marta Dinata, and Modesta Binti Husna. "IMPLEMENTASI YOU ONLY LOOK ONCE v8 DALAM DETEKSI MAKANAN WARUNG TEGAL UNTUK SISTEM PERHITUNGAN HARGA OTOMATIS." JATI (Jurnal Mahasiswa Teknik Informatika) 9, no. 2 (2025): 3476–83. https://doi.org/10.36040/jati.v9i2.13465.

Abstract:
Warung Tegal (Warteg) is a popular type of food stall in Indonesia, but its food prices are still calculated manually, which can lead to transaction errors. This study aims to develop an automatic food detection system using YOLO v8 to automate price calculation. The dataset consists of various Warteg dishes processed with augmentation techniques such as cropping, rotation, and lighting adjustments to improve model performance. The results show that in the best test, with a 70:30 dataset split (20 epochs, batch size 16, learning rate 0.001), the YOLO v8 model achieved a precision of 0.602, a recall of 0.176, an F1-score of 0.32, a box_loss of 1.756, and a mAP@0.5 of 0.229. The main challenges include the limited dataset, background complexity, and the lack of comparison against public datasets. Although accuracy reached 75%-100% under some conditions, a larger dataset and comparisons with other models are needed to improve accuracy. The system has the potential to support the digitalization of the culinary industry and improve transaction efficiency in Warteg stalls.
12

Michael, Goodnews, Essa Q. Shahra, Shadi Basurra, Wenyan Wu, and Waheb A. Jabbar. "Real-Time Pipeline Fault Detection in Water Distribution Networks Using You Only Look Once v8." Sensors 24, no. 21 (2024): 6982. http://dx.doi.org/10.3390/s24216982.

Abstract:
Detecting faulty pipelines in water management systems is crucial for ensuring a reliable supply of clean water. Traditional inspection methods are often time-consuming, costly, and prone to errors. This study introduces an AI-based model utilizing images to detect pipeline defects, focusing on leaks, cracks, and corrosion. The YOLOv8 model is employed for object detection due to its exceptional performance in detection, segmentation, pose estimation, tracking, and classification. By training on a large dataset of labeled images, the model effectively learns to identify visual patterns associated with pipeline faults. Experiments conducted on a real-world dataset demonstrate that the AI-based model significantly outperforms traditional methods in detection accuracy. The model also exhibits robustness to various environmental conditions such as lighting changes, camera angles, and occlusions, ensuring reliable performance in diverse scenarios. The model's efficient processing time enables real-time fault detection in large-scale water distribution networks. Implementing this AI-based model offers numerous advantages for water management systems: it reduces dependence on manual inspections, thereby saving costs and enhancing operational efficiency, and it facilitates proactive maintenance through the early detection of faults, preventing water loss, contamination, and infrastructure damage. The results from the three conducted experiments indicate that the model from Experiment 1 achieves a commendable mAP50 of 90% in detecting faulty pipes, with an overall mAP50 of 74.7%; the model from Experiment 3 exhibits superior overall performance, achieving a mAP50 of 76.1%. This research presents a promising approach to improving the reliability and sustainability of water management systems through AI-based fault detection using image analysis.
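The mAP50 figures quoted throughout these abstracts hinge on Intersection-over-Union: a predicted box counts as a true positive when its IoU with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion, with boxes as (x1, y1, x2, y2) tuples and not tied to any particular paper's code:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right corners.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred, gt, threshold=0.5):
    """A detection matches ground truth at mAP50 when IoU >= 0.5."""
    return iou(pred, gt) >= threshold
```

mAP50 then averages precision over recall levels using this matching rule per class; mAP50-95 repeats the computation at IoU thresholds from 0.5 to 0.95 and averages the results, which is why it is consistently the lower number.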
13

Liu, Hui, Yushuo Hou, Jicheng Zhang, Ping Zheng, and Shouyin Hou. "Research on Weed Reverse Detection Methods Based on Improved You Only Look Once (YOLO) v8: Preliminary Results." Agronomy 14, no. 8 (2024): 1667. http://dx.doi.org/10.3390/agronomy14081667.

Abstract:
The rapid and accurate detection of weeds is the prerequisite and foundation for precision weeding, automation, and intelligent field operations. Due to the wide variety of weeds in the field and their significant morphological differences, most existing detection methods can only recognize major crops and weeds, so there is a pressing need to enhance accuracy. This study introduces a novel weed detection approach that integrates the GFPN (Green Feature Pyramid Network), Slide Loss, and multi-SEAM (Spatial and Enhancement Attention Modules) to enhance accuracy and improve efficiency. The approach recognizes crop seedlings using an improved YOLO v8 algorithm, then detects weeds in reverse through graphics processing technology. The experimental results demonstrate that the improved YOLO v8 model achieved remarkable performance, with an accuracy of 92.9%, a recall rate of 87.0%, and an F1 score of 90%, at a detection speed of approximately 22.47 ms per image. In the field test, crop detection was best when images were shot from a height of 80 cm to 100 cm. This reverse weed detection method addresses the challenges posed by weed diversity and the complexity of image recognition modeling, thereby improving the efficiency and quality of automated, intelligent weeding, and it provides valuable technical support for precision weeding in farmland operations.
14

Eko Farhan, Arief, Prajoko Prajoko, and Agung Pambudi. "PENDETEKSIAN KANDUNGAN GULA DAN KARBOHIDRAT PADA UMBI-UMBIAN DENGAN METODE YOLO (YOU ONLY LOOK ONCE) v8." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 5 (2024): 10043–50. http://dx.doi.org/10.36040/jati.v8i5.10891.

Abstract:
In Indonesia, most food intake consists of carbohydrates. After consumption, these carbohydrates are digested by enzymes in the human body and converted into glucose. Blood sugar levels play an important role in blood pressure fluctuations, so managing blood pressure and sugar levels is essential for improving quality of life. Excessive sugar intake, especially from an unhealthy diet, can lead to diabetes mellitus, which affects many people, particularly the elderly. Traditional tuber-based foods offer potential as a healthy carbohydrate source and can be a good choice for maintaining balanced blood sugar. This study therefore proposes a system for detecting sugar and carbohydrate content in tubers using YOLO technology. A YOLOv8 model was developed on a dataset of 1,800 images, producing a highly accurate tuber detection system. The model uses the YOLOv8s variant with a batch size of 8 and the Adam optimizer with a learning rate of 0.001. Test results show excellent performance, with a mAP50 (mean Average Precision) of 99.2%, a Precision-Confidence curve of 94.7%, a Precision-Recall of 99.4%, and an F1-Confidence curve of 85.2%.
15

Rismayanti, Azizah, and Reni Rahmadewi. "DETEKSI DAN KLASIFIKASI TINGKAT KEMATANGAN BUAH MANGGA HARUM MANIS MENGGUNAKAN YOU ONLY LOOK ONCE (YOLO) V8." JATI (Jurnal Mahasiswa Teknik Informatika) 9, no. 3 (2025): 3645–54. https://doi.org/10.36040/jati.v9i3.13320.

Abstract:
Harum Manis is a mango variety from Probolinggo, East Java, characterized by an oval shape, a slight beak, and a tapered tip. Sorting Harum Manis mangoes by ripeness in Probolinggo is still done manually by visual inspection, which can lead to inconsistent quality assessment. This non-uniformity can affect distribution standards and the fruit's selling price. This study therefore develops an automatic YOLOv8-based system to detect and classify Harum Manis mango ripeness into three categories: Unripe, Half-Ripe, and Ripe. The dataset consists of 540 images annotated in Roboflow, split into 87% training, 8% validation, and 4% testing. The model was trained for 50 epochs with a batch size of 16 and an image size of 500. The results show that the model can detect ripeness with different accuracy in each category: the highest mAP, 82.2%, was for the Unripe category, while Half-Ripe performed lowest at 76.6%. Overall, the model achieved a Precision of 81.1%, a Recall of 81.5%, and a mAP of 79.8%. These results indicate that YOLOv8 has great potential for object detection in real-world settings, although improvements are needed in certain categories to raise overall accuracy.
16

Adi Permana, Arya, Muhammad Rafi Muttaqin, and Muhamad Agus Sunandar. "SISTEM DETEKSI API SECARA REAL TIME MENGGUNAKAN ALGORITMA YOU ONLY LOOK ONCE (YOLO) VERSI 8." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 5 (2024): 10395–400. http://dx.doi.org/10.36040/jati.v8i5.10847.

Abstract:
Rapid technological development, especially in Artificial Intelligence (AI), has brought advances in fire detection. Data from the West Java Provincial Disaster Management Agency show that from 2019 to 2021 there were 607 building fires, with Bandung City recording 116 incidents. This high fire risk shows the need for more effective preventive measures to protect people and property. To address this problem, this study uses the You Only Look Once (YOLO)v8 algorithm to detect fire indoors. The methodology applied is CRISP-DM, covering the Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment stages. With a dataset of 2,509 images, the model showed impressive fire detection performance, with 93.5% accuracy, a mAP50 of 90%, and a mAP50-95 of 66%. Testing used two input types: live webcam video and video files. Both input methods produced confidence scores from 0.50 to 0.90, indicating that the model detects fire with high confidence. These results are expected to reduce the risk of damage from undetected fires and improve firefighting efficiency, contributing significantly to safety and property protection.
17

Li, Enze, Qibiao Wang, Jinzhao Zhang, Weihan Zhang, Hanlin Mo, and Yadong Wu. "Fish Detection under Occlusion Using Modified You Only Look Once v8 Integrating Real-Time Detection Transformer Features." Applied Sciences 13, no. 23 (2023): 12645. http://dx.doi.org/10.3390/app132312645.

Abstract:
Fish object detection has attracted significant attention because of the considerable role that fish play in human society and ecosystems and the need to gather more comprehensive fish data through underwater videos or images. However, fish detection has always struggled with occlusion caused by dense populations and underwater plants that obscure the fish, and no complete solution has been found to date. To address the occlusion issue in fish detection, this work creates a dataset of occluded fishes, integrates the innovative modules of the Real-time Detection Transformer (RT-DETR) into You Only Look Once v8 (YOLOv8), and applies repulsion loss. The results show that on the occlusion dataset, the mAP of the original YOLOv8 is 0.912, while the mAP of the modified YOLOv8 is 0.971. The modified YOLOv8 also outperforms the original in terms of loss curves, F1-Confidence curves, P-R curves, the mAP curve, and actual detection results. All of this indicates that the modified YOLOv8 is suitable for fish detection in occlusion scenes.
18

Rahman, Shakila, Syed Muhammad Hasnat Jamee, Jakaria Khan Rafi, Jafrin Sultana Juthi, Abdul Aziz Sajib, and Jia Uddin. "Real-time smoke and fire detection using you only look once v8-based advanced computer vision and deep learning." International Journal of Advances in Applied Sciences (IJAAS) 13, no. 4 (2024): 987–99. https://doi.org/10.11591/ijaas.v13.i4.pp987-999.

Abstract:
Fire and smoke pose severe threats, causing damage to property and the environment and endangering lives. Traditional fire detection methods struggle with accuracy and speed, hindering real-time detection. Thus, this study introduces an improved fire and smoke detection approach utilizing the you only look once (YOLO)v8-based deep learning model. This work aims to enhance accuracy and speed, which are crucial for early fire detection. The methodology involves preprocessing a large dataset containing 5,700 images depicting fire and smoke scenarios. YOLOv8 has been trained and validated, outperforming baseline models (YOLOv7, YOLOv5, ResNet-32, and MobileNet-v2) on precision, recall, and mean average precision (mAP) metrics. The proposed method achieves 68.3% precision, 54.6% recall, a 60.7% F1 score, and 57.3% mAP. Integrating YOLOv8 into fire and smoke detection systems can significantly improve response times, enhance the ability to mitigate fire outbreaks, and potentially save lives and property. This research advances fire detection systems and establishes a precedent for applying deep learning techniques to critical safety applications, pushing the boundaries of innovation in public safety.
19

Bajpai, Manas. "YOLO Models for Security and Surveillance Applications." International Journal for Research in Applied Science and Engineering Technology 12, no. 6 (2024): 2513–18. http://dx.doi.org/10.22214/ijraset.2024.63521.

Abstract:
Since 2015, the YOLO (You Only Look Once) series has evolved to YOLO-v8, prioritizing real-time processing and high accuracy for security and surveillance applications. Architectural enhancements in each iteration, culminating in YOLOv9, cater to rapid detection, precision, and adaptability to resource-constrained edge devices. This study examines YOLO's evolution, emphasizing its relevance to security and surveillance contexts. Notable improvements in architecture, coupled with practical deployments for defect detection, underscore YOLO's alignment with stringent security and surveillance requirements.
20

Meena, M., and G. Ramesh. "SMART BABY MONITORING SYSTEM USING YOLO V8 ALGORITHM." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (2024): 1–16. http://dx.doi.org/10.55041/ijsrem36698.

Abstract:
The Smart Baby Monitoring System using the YOLO V8 algorithm is designed to enhance infant monitoring by leveraging advanced computer vision techniques. The project utilizes YOLO (You Only Look Once) version 8, a state-of-the-art object detection algorithm, implemented with Python and frameworks like TensorFlow or PyTorch, to detect and track objects in real-time video feeds. The system incorporates facial recognition to identify known caregivers and alert mechanisms for unusual activities or emergencies. The user interface provides real-time alerts, visualizations, and historical data analysis for caregivers via a web or mobile application. By leveraging YOLO V8's efficiency in object detection and Python's capabilities for data processing and integration, the system aims to enhance safety, improve caregiving efficiency, and provide peace of mind to parents and caregivers.
21

Rahman, Shakila, Syed muhammad Hasnat Jamee, Jakaria Khan Rafi, Jafrin Sultana Juthi, Abdul Aziz Sajib, and Jia Uddin. "Real-time smoke and fire detection using you only look once v8-based advanced computer vision and deep learning." International Journal of Advances in Applied Sciences 13, no. 4 (2024): 987. http://dx.doi.org/10.11591/ijaas.v13.i4.pp987-999.

Full text
Abstract:
Fire and smoke pose severe threats, causing damage to property and the environment and endangering lives. Traditional fire detection methods struggle with accuracy and speed, hindering real-time detection. Thus, this study introduces an improved fire and smoke detection approach utilizing the you only look once (YOLO)v8-based deep learning model. This work aims to enhance accuracy and speed, which are crucial for early fire detection. The methodology involves preprocessing a large dataset containing 5,700 images depicting fire and smoke scenarios. YOLOv8 has been trained and validated, outperforming several baseline models (YOLOv7, YOLOv5, ResNet-32, and MobileNet-v2) in the precision, recall, and mean average precision (mAP) metrics. The proposed method achieves 68.3% precision, 54.6% recall, 60.7% F1 score, and 57.3% mAP. Integrating YOLOv8 in fire and smoke detection systems can significantly improve response times, enhance the ability to mitigate fire outbreaks, and potentially save lives and property. This research advances fire detection systems and establishes a precedent for applying deep learning techniques to critical safety applications, pushing the boundaries of innovation in public safety.
APA, Harvard, Vancouver, ISO, and other styles
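The F1 score quoted above is the harmonic mean of precision and recall; as a quick sanity check, the paper's 68.3% precision and 54.6% recall do reproduce its 60.7% F1:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision and recall reported in the abstract above.
print(round(f1_score(0.683, 0.546), 3))  # -> 0.607
```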
22

Elesawy, Abdelrahman, Eslam Mohammed Abdelkader, and Hesham Osman. "A Detailed Comparative Analysis of You Only Look Once-Based Architectures for the Detection of Personal Protective Equipment on Construction Sites." Eng 5, no. 1 (2024): 347–66. http://dx.doi.org/10.3390/eng5010019.

Full text
Abstract:
For practitioners and researchers, construction safety is a major concern. The construction industry is among the world’s most dangerous industries, with a high number of accidents and fatalities. Workers in the construction industry are still exposed to safety risks even after conducting risk assessments. The use of personal protective equipment (PPE) is essential to help reduce the risks to laborers and engineers on construction sites. Developments in the field of computer vision and data analytics, especially using deep learning algorithms, have the potential to address this challenge in construction. This study developed several models to enhance the safety compliance of construction workers with respect to PPE. Through the utilization of convolutional neural networks (CNNs) and the application of transfer learning principles, this study builds upon the foundational YOLO-v5 and YOLO-v8 architectures. The resultant model excels in predicting six key categories: person, vest, and four helmet colors. The developed model is validated using a high-quality CHV benchmark dataset from the literature. The dataset is composed of 1330 images and manages to account for a real construction site background, different gestures, varied angles and distances, and multi-PPE. Consequently, the comparison among the ten models of YOLO-v5 (You Only Look Once) and five models of YOLO-v8 showed that YOLO-v5x6’s running speed in analysis was faster than that of YOLO-v5l; however, YOLO-v8m stands out for its higher precision and accuracy. Furthermore, YOLOv8m has the best mean average precision (mAP), with a score of 92.30%, and the best F1 score, at 0.89. Significantly, the attained mAP reflects a substantial 6.64% advancement over previous related research studies. Accordingly, the proposed research has the capability of reducing and preventing construction accidents that can result in death or serious injury.
APA, Harvard, Vancouver, ISO, and other styles
23

Affandi, Rikemaulani, and Budi Hartono. "Quadcopter v8: Kaji Pengolahan Citra untuk Misi Terbang Pendeteksian Keberadaan Manusia." Prosiding Industrial Research Workshop and National Seminar 14, no. 1 (2023): 567–72. http://dx.doi.org/10.35313/irwns.v14i1.5448.

Full text
Abstract:
A quadcopter has a small form factor, so it can move freely in difficult places, and it can take off vertically, meaning no runway is needed for flight. Technology continues to advance rapidly. Object detection technology is currently very popular; needs such as detecting colors, faces, fingerprints, and the like marked the beginning of the development of more modern digital image applications. In this research, a quadcopter was built with an image processing system using the YOLO (You Only Look Once) algorithm that can recognize human objects. The quadcopter can fly stably using a Pixhawk Px4 flight controller and has been fitted with a human presence detection system running on a Raspberry Pi, flying stably at an altitude of 2-6 meters.
APA, Harvard, Vancouver, ISO, and other styles
24

Jung, Do-Yoon, Yeon-Jae Oh, and Nam-Ho Kim. "A Study on GAN-Based Car Body Part Defect Detection Process and Comparative Analysis of YOLO v7 and YOLO v8 Object Detection Performance." Electronics 13, no. 13 (2024): 2598. http://dx.doi.org/10.3390/electronics13132598.

Full text
Abstract:
The main purpose of this study is to generate defect images of body parts using a GAN (generative adversarial network) and to compare and analyze the performance of the YOLO (You Only Look Once) v7 and v8 object detection models. The goal is to accurately judge good and defective products. Quality control is very important in the automobile industry, and defects in body parts directly affect vehicle safety, so the development of highly accurate defect detection technology is essential. This study ensures data diversity by generating defect images of car body parts using a GAN and, through this, compares and analyzes the object detection performance of the YOLO v7 and v8 models to present an optimal solution for detecting defects in car parts. Through experiments, the dataset was expanded by adding fake defect images generated by the GAN. Performance experiments of the YOLO v7 and v8 models on the data obtained through this approach demonstrated that YOLO v8 effectively identifies objects and detects defects even with a smaller amount of data. The readout of the detection system can be improved through software calibration.
APA, Harvard, Vancouver, ISO, and other styles
25

Serttaş, Esma, and Fatih Gül. "YOLO V8 Algoritması ile Otomatik Plaka Tanıma ve Görselleştirme Sistemi." Bilişim Teknolojileri Dergisi 18, no. 1 (2025): 1–10. https://doi.org/10.17671/gazibtd.1506041.

Full text
Abstract:
In this study, a system was designed that, using a camera placed at a certain distance and the YOLO (You Only Look Once) V8 algorithm, automatically recognizes and visualizes the license plate on a vehicle. Although YOLO V8 has advanced computer vision capabilities, it does not include a dedicated plate recognition model. This work proposes a model that can be used efficiently in areas requiring security measures, minimizing manpower and cost. The plate dataset was created using the computer vision platform Roboflow, and an artificial neural network training model was developed. Using the Python programming language, the neural network model was trained with the YOLO V8 algorithm on TR plates conforming to the Highway Traffic Regulation, and plate recognition was performed. The developed system uses the open-source libraries OpenCV, Time, Random, Numpy, Ultralytics, and EasyOCR. Plate recognition results are visualized through a user interface built with Tkinter. The system was tested on images taken head-on and from different angles within 30° to the right and left, achieving high accuracy rates (99% @ 25 epochs). This study proposes a solution that can be integrated into existing YOLOv8-based applications in various fields such as traffic management, parking systems, and security applications.
APA, Harvard, Vancouver, ISO, and other styles
26

Hussain, Muhammad. "YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection." Machines 11, no. 7 (2023): 677. http://dx.doi.org/10.3390/machines11070677.

Full text
Abstract:
Since its inception in 2015, the YOLO (You Only Look Once) variant of object detectors has rapidly grown, with the latest release of YOLO-v8 in January 2023. YOLO variants are underpinned by the principle of real-time and high-classification performance, based on limited but efficient computational parameters. This principle has been found within the DNA of all YOLO variants with increasing intensity, as the variants evolve addressing the requirements of automated quality inspection within the industrial surface defect detection domain, such as the need for fast detection, high accuracy, and deployment onto constrained edge devices. This paper is the first to provide an in-depth review of the YOLO evolution from the original YOLO to the recent release (YOLO-v8) from the perspective of industrial manufacturing. The review explores the key architectural advancements proposed at each iteration, followed by examples of industrial deployment for surface defect detection endorsing its compatibility with industrial requirements.
APA, Harvard, Vancouver, ISO, and other styles
27

Noer Fadilah, Rina Putri, Rasmi Rikmasari, Saiful Akbar, and Arlette Suzy Setiawan. "IDCCD: evaluation of deep learning for early detection caries based on ICDAS." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 1 (2025): 381. https://doi.org/10.11591/ijeecs.v38.i1.pp381-392.

Full text
Abstract:
Dental caries is a common oral disease in children, influenced by environmental, psychological, behavioral, and biological factors. The American academy of pediatric dentistry recommends screening from the time the first tooth erupts or at one year of age to prevent caries, which mostly affects children from racial and ethnic minorities. In Indonesia, the 2023 health survey reported a caries prevalence of 84.8% in children aged 5-9 years. This research introduces early caries detection using three deep learning models: faster-RCNN, you only look once (YOLO) V8, and detection transformer (DETR), using Indonesian dental caries characteristic datasets (IDCCD) focused on Indonesian data with international caries detection and assessment system (ICDAS) classification D0 to D6. The results showed that YOLO V8-s and DETR gave good results, with mean average precision (mAP) of 41.8% and 41.3% for intersection over union (IoU) 50, and 24.3% and 26.2% for IoU 50:90. Precision-recall (PR) curves show that both models have high precision at low recall (0 to 0.2), but precision decreases sharply as recall increases. YOLO V8-s showed a slower and more regular decrease in precision, indicating a more stable performance compared to DETR.
APA, Harvard, Vancouver, ISO, and other styles
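The mAP figures above at IoU 50 and IoU 50:90 differ only in the intersection-over-union threshold at which a detection counts as correct. A minimal IoU computation for axis-aligned boxes (the coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Two partially overlapping boxes: intersection 2, union 6.
print(round(iou((0, 0, 2, 2), (1, 0, 3, 2)), 4))  # -> 0.3333
```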
29

TS, Nishchitha. "Real Time Object Detection in Autonomous Vehicle Using Yolo V8." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48914.

Full text
Abstract:
Autonomous vehicles rely heavily on real-time object detection to ensure safe and efficient navigation in dynamic environments. This paper explores the implementation of YOLOv8 (You Only Look Once, version 8), a state-of-the-art deep learning model for object detection, within autonomous driving systems. YOLOv8 offers enhanced speed, accuracy, and lightweight deployment capabilities compared to its predecessors, making it highly suitable for real-time applications. The model is trained and evaluated on datasets such as KITTI and COCO to detect and classify various objects including pedestrians, vehicles, traffic signs, and lane markings. The integration of YOLOv8 with on-board vehicle sensors and edge computing units enables rapid inference and low-latency decision-making. Experimental results demonstrate that YOLOv8 achieves high mean average precision (mAP) with low computational overhead, affirming its potential for deployment in real-world autonomous driving scenarios. This work highlights the advantages of YOLOv8 in improving the perception module of self-driving cars and addresses challenges related to detection in complex, real-time traffic conditions. Keywords: Real-Time Object Detection, Autonomous Vehicles, YOLOv8, Deep Learning, Computer Vision, Convolutional Neural Networks (CNNs), Traffic Scene Understanding, Edge Computing, Mean Average Precision (mAP), Self-Driving Cars, Sensor Fusion, Road Safety
APA, Harvard, Vancouver, ISO, and other styles
30

Hanna, Arini Parhusip, Trihandaru Suryasatriya, Indrajaya Denny, and Labadin Jane. "Implementation of YOLOv8-seg on store products to speed up the scanning process at point of sales." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 3 (2024): 3291–305. https://doi.org/10.11591/ijai.v13.i3.pp3291-3305.

Full text
Abstract:
You only look once v8 (YOLOv8)-seg and its variants are implemented to accelerate the collection of goods for a store for selling activity in Indonesia. The method used here is object detection and segmentation of these objects, a combination of detection and segmentation called instance segmentation. The novelty lies in the customization and optimization of YOLOv8-seg for detecting and segmenting 30 specific Indonesian products. The use of augmented data (125 images augmented into 1,250 images) enhances the model's ability to generalize and perform well in various scenarios. The small number of data points and the small number of epochs have proven reliable algorithms to implement on store products instead of using QR codes in a digital manner. Five models are examined, i.e., YOLOv8-seg, YOLOv8s-seg, YOLOv8m-seg, YOLOv8l-seg, and YOLOv8x-seg, with a data distribution of 64% for the training dataset, 16% for the validation dataset, and 20% for the testing dataset. The best model, YOLOv8l-seg, was obtained with the highest mean average precision for bounding boxes (mAP-box) of 99.372% and a mAP-mask value of 99.372% from testing the testing dataset. However, the YOLOv8m-seg model can be the best alternative model with a mAP-box value of 99.330% since the number of parameters and the computational speed are the best compared to other models.
APA, Harvard, Vancouver, ISO, and other styles
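The 64/16/20 split reported above is what results from an 80/20 train/test split followed by a further 80/20 train/validation split of the remainder; a stdlib sketch (file names are hypothetical):

```python
import random

def split_dataset(items, test_frac=0.2, val_frac=0.2, seed=42):
    """80/20 test split, then 80/20 validation split of the rest -> 64/16/20."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_test = int(len(items) * test_frac)
    test, rest = items[:n_test], items[n_test:]
    n_val = int(len(rest) * val_frac)
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

# 125 source images augmented tenfold, as in the abstract (file names hypothetical).
images = [f"img_{i:04d}.jpg" for i in range(1250)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # -> 800 200 250
```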
31

Malik, Nirupama, and Amiyajyoti Nayak. "Artificial Intelligence System Based Personal Protective Equipment Detection for Construction Site Safety using YOLOv8." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2675–83. https://doi.org/10.22214/ijraset.2025.67829.

Full text
Abstract:
This paper addresses the limitations of current deep learning models for detecting Personal Protective Equipment (PPE) on construction sites, where performance enhancement is crucial. This paper uses the You Only Look Once (YOLO) architecture, focusing on ten categories: 'Hardhat', 'Mask', 'NO-Hardhat', 'NO-Mask', 'NO-Safety Vest', 'Person', 'Safety Cone', 'Safety Vest', 'machinery', 'vehicle'. A new high-quality dataset, named the PPE dataset from Roboflow, was created, comprising 1,330 images that reflect real construction environments, various poses, angles, distances, and multiple PPE types. Among the evaluated models, YOLO v8 achieved the highest mean Average Precision (mAP) of 87.55% and demonstrated the fastest processing speed at 52 images per second on a GPU. The study involved training a model using 2,934 images and validating it with 816, resulting in a 95% mean Average Precision (mAP). It underscores the significant role of artificial intelligence in enhancing safety management and occupational health within the construction sector. This research serves as a foundation for future advancements in AI-driven safety measures, addressing the urgent need for innovative strategies to minimize workplace risks and elevate compliance standards in the industry.
APA, Harvard, Vancouver, ISO, and other styles
32

Hutabarat, Rizky Theofilus, and Robert Kurniawan. "Deteksi Sampah di Permukaan Sungai menggunakan Convolutional Neural Network dengan Algoritma YOLOv8." Seminar Nasional Official Statistics 2024, no. 1 (2024): 537–48. http://dx.doi.org/10.34123/semnasoffstat.v2024i1.2099.

Full text
Abstract:
The increasing amount of solid waste in rivers is one of the main problems in urban areas, since garbage-filled rivers can lead to problems such as flooding or various diseases. The aim of this research is to build an object detection model using a Convolutional Neural Network (CNN) with the YOLOv8 (You Only Look Once v8) algorithm, and to implement the model to detect garbage floating on the surface of the Ciliwung River. YOLOv8 was chosen because it is known for its high speed and accuracy. The data used to build the dataset were collected from Google Images and YouTube, along with pictures taken directly at the Ciliwung River using a personal smartphone. The best epoch was the 177th, with a Precision of 84.02%, Recall of 91.03%, Accuracy of 77.6%, and F1-Score of 87.38%. The conclusion is that the model built with the YOLOv8 algorithm can be used to detect floating garbage on the surface of the Ciliwung River.
APA, Harvard, Vancouver, ISO, and other styles
33

Nusman, Bayu, Aviv Yuniar Rahman, and Rangga Pahlevi Putera. "LOBSTER AGE DETECTION USING DIGITAL VIDEO-BASED YOLO V8 ALGORITHM." Jurnal Teknik Informatika (Jutif) 5, no. 4 (2024): 1155–63. https://doi.org/10.52436/1.jutif.2024.5.4.2144.

Full text
Abstract:
Lobster is an aquatic animal that has high economic value in the fishing industry. Demand for lobster in both domestic and export markets continues to increase thanks to its delicious meat and a variety of desirable dishes. Indonesia, especially Java Island, contributes significantly to the national lobster production. However, the current manual determination of lobster age has limitations such as complexity, time required, and subjectivity in assessment. To overcome this problem, this research proposes the detection of lobster age using the YOLO (You Only Look Once) method, specifically the YOLOv8 version. This algorithm is known to be able to perform image and video recognition quickly and produce high accuracy. YOLOv8 can be run using a GPU, enabling parallel operations that significantly increase the speed of object detection compared to using a CPU alone. The data processing in this study involves several stages, starting from pre-processing in the form of image extraction and bounding from lobster videos. Next, the YOLOv8 algorithm was used to train the model with customized grid and bounding box parameters. The trained model is then validated and tested using lobster image and video data. The results of the test show that the developed YOLOv8 model has a precision of 0.997, recall of 0.998, mAP50 of 0.995, and mAP50-95 of 0.971. This shows that the model is able to detect and determine the age of the lobster with very high accuracy, providing a more efficient and objective solution than the manual method.
APA, Harvard, Vancouver, ISO, and other styles
34

Pawłowski, Jakub, Marcin Kołodziej, and Andrzej Majkowski. "Implementing YOLO Convolutional Neural Network for Seed Size Detection." Applied Sciences 14, no. 14 (2024): 6294. http://dx.doi.org/10.3390/app14146294.

Full text
Abstract:
The article presents research on the application of image processing techniques and convolutional neural networks (CNN) for the detection and measurement of seed sizes, specifically focusing on coffee and white bean seeds. The primary objective of the study is to evaluate the potential of using CNNs to develop tools that automate seed recognition and measurement in images. A database was created, containing photographs of coffee and white bean seeds with precise annotations of their location and type. Image processing techniques and You Only Look Once v8 (YOLO) models were employed to analyze the seeds’ position, size, and type. A detailed comparison of the effectiveness and performance of the applied methods was conducted. The experiments demonstrated that the best-trained CNN model achieved a segmentation accuracy of 90.1% IoU, with an average seed size error of 0.58 mm. The conclusions indicate a significant potential for using image processing techniques and CNN models in automating seed analysis processes, which could lead to increased efficiency and accuracy in these processes.
APA, Harvard, Vancouver, ISO, and other styles
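Reporting seed size error in millimetres implies a pixel-to-millimetre calibration step somewhere in the pipeline; a sketch of that conversion, with an assumed calibration scale not taken from the paper:

```python
def seed_length_mm(pixel_length: float, px_per_mm: float) -> float:
    """Convert a pixel measurement to millimetres via a calibration scale."""
    return pixel_length / px_per_mm

# Hypothetical calibration: a 10 mm reference object spans 85 px, so 8.5 px per mm.
px_per_mm = 85 / 10
print(seed_length_mm(68, px_per_mm))  # -> 8.0
```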
35

Sk, Sadik. "Effective Traffic Signal Control System Using Deep Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31815.

Full text
Abstract:
Traffic congestion is becoming a serious problem with a large number of vehicles on the roads. The present traffic system is a timer-based system that operates irrespective of the amount of traffic, even if an ambulance is present. So, this deep learning project is designed so that the traffic control system is based on vehicle density in a lane; it also detects the ambulance's lane and lets that particular lane pass as the first priority. In fact, we use computer vision to capture the characteristics of the competing traffic queues at the signals. This is done by an object detection model based on a deep learning model called You Only Look Once (YOLO)v8. Then traffic signal phases are optimized according to collected data, mainly queue density and waiting time per vehicle, to enable more vehicles to pass safely with minimum waiting time. Keywords: Object detection, ambulance, YOLOv8, Deep learning
APA, Harvard, Vancouver, ISO, and other styles
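The priority rule described above can be sketched as follows; this is an assumed reconstruction of the logic, not the paper's code, and the lane names and vehicle counts are hypothetical:

```python
def pick_green_lane(lanes):
    """Choose the lane to release: any ambulance lane first, else the longest queue.

    lanes maps a lane id to {'count': vehicles detected, 'ambulance': bool}.
    """
    ambulance_lanes = [lane for lane, obs in lanes.items() if obs["ambulance"]]
    if ambulance_lanes:
        return ambulance_lanes[0]
    return max(lanes, key=lambda lane: lanes[lane]["count"])

# Hypothetical detections from one signal cycle.
lanes = {
    "north": {"count": 12, "ambulance": False},
    "east": {"count": 30, "ambulance": False},
    "south": {"count": 5, "ambulance": True},
}
print(pick_green_lane(lanes))  # -> south (ambulance overrides queue length)
```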
36

Zhang, Jianing. "Evolution of YOLO: A Comparative Analysis of YOLOv5, YOLOv8, and YOLOv10." Applied and Computational Engineering 146, no. 1 (2025): 15–23. https://doi.org/10.54254/2755-2721/2025.21591.

Full text
Abstract:
This paper presents a systematic comparative analysis of three versions of the YOLO (You Only Look Once) target detection algorithm: YOLOv5, YOLOv8 and YOLOv10. Through experiments on the VOC2012 dataset (converted to COCO format), this paper evaluates the versions across multiple dimensions such as detection performance, inference speed and model complexity. The experimental results show that detection accuracy and robustness improve significantly with version iteration: the mAP of v8 and v10 improves by 6.69% and 9.12% relative to v5. However, the number of model parameters increases by 68.98% and 48.66%, and the FLOPs increase by 94.08% and 91.51%, respectively, which leads to an increased demand for computational resources and a slight decrease in inference speed compared to the old version, especially in practical application scenarios with limited resources. This paper not only demonstrates the continuous progress of the network structure and training strategy, but also explores the balance between performance and efficiency in real-time target detection, which provides references and insights for the future development of related technologies.
APA, Harvard, Vancouver, ISO, and other styles
37

Şimşek, Mehmet Ali, and Ahmet Sertbaş. "AUTOMATIC DETECTION OF MENISCUS TEARS FROM KNEE MRI IMAGES USING DEEP LEARNING: YOLO V8, V9, AND V10 SERIES." Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi 28, no. 1 (2025): 292–308. https://doi.org/10.17780/ksujes.1559862.

Full text
Abstract:
Meniscal tears are a disease that occurs in the knee joint and negatively affects people's mobility. In this study, the performance of the state-of-the-art (SOTA) YOLO (You Only Look Once) models, in particular YOLOv8l, YOLOv8x, YOLOv9c, YOLOv9e, YOLOv10l, and YOLOv10x, for the detection of meniscal tears was investigated. The algorithms were trained and tested with data from magnetic resonance imaging (MRI). In our study, the YOLOv9e model showed the highest performance and achieved the best results in the training phase with a mAP50 of 0.91807, a precision of 0.87684, a recall of 0.93871 and an F1 score of 0.90672. This study makes a unique contribution to the field with its advanced algorithms and comprehensive performance analysis. The findings show that deep learning algorithms are suitable for clinical use in the automatic detection and localization of meniscal tears. In this way, the possibility of early diagnosis increases, and patients can be directed to the right treatment, preventing joint problems that may occur in the future. In future studies, it is aimed to increase the generalization capabilities of the models with larger data sets and different anatomical structures.
APA, Harvard, Vancouver, ISO, and other styles
38

Motru, Vijaya Raju, Subbarao P. Krishna, and Babu A. Sudhir. "Early disease detection of black gram plant leaf using cloud computing based YOLO V8 model." i-manager's Journal on Information Technology 12, no. 4 (2023): 18. http://dx.doi.org/10.26634/jit.12.4.20209.

Full text
Abstract:
Plant diseases pose a major threat to agricultural productivity and economies dependent on it. Monitoring plant growth and phenotypes is vital for early disease detection. In Indian agriculture, black-gram (Vigna mungo) is an important pulse crop afflicted by viral infections like Urdbean Leaf Crinkle Virus (ULCV), causing stunted growth and crinkled leaves. Such viral epidemics lead to massive crop losses and financial distress for farmers. According to the FAO, plant diseases cost countries $220 billion annually. Hence, there is a need for quick and accurate diagnosis of crop diseases like ULCV. Recent advances in computer vision and image processing provide promising techniques for automated non-invasive disease detection using leaf images. The key steps involve image pre-processing, segmentation, informative feature extraction, and training machine learning models for reliable classification. In this work, an automated ULCV detection system is developed using black gram leaf images. The Grey Level Co-occurrence Matrix (GLCM) technique extracts discriminative features from leaves. Subsequently, a deep convolutional neural network called YOLO (You Only Look Once) is leveraged to accurately diagnose ULCV based on the extracted features. Extensive experiments demonstrate the effectiveness of the GLCM-YOLO pipeline in identifying ULCV-infected leaves with high precision. Such automated diagnosis can aid farmers by providing early disease alerts, thereby reducing crop losses due to viral epidemics.
APA, Harvard, Vancouver, ISO, and other styles
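A grey-level co-occurrence matrix, as used above, counts how often pairs of grey levels appear at a fixed pixel offset; a minimal sketch for the horizontal offset (0, 1), using a tiny hypothetical image:

```python
def glcm(image, levels):
    """Grey-level co-occurrence counts for the (0, 1) offset (right neighbour)."""
    counts = [[0] * levels for _ in range(levels)]
    for row in image:
        for left, right in zip(row, row[1:]):
            counts[left][right] += 1
    return counts

# Tiny hypothetical 2x3 image with three grey levels.
image = [
    [0, 0, 1],
    [1, 2, 2],
]
print(glcm(image, levels=3))  # -> [[1, 1, 0], [0, 0, 1], [0, 0, 1]]
```

Texture features such as contrast or homogeneity are then computed from the normalized matrix and fed to the classifier.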
39

Zhong, Zixuan. "Pedestrian detection and gender recognition utilizing YOLO and CNN algorithms." Applied and Computational Engineering 31, no. 1 (2024): 133–38. http://dx.doi.org/10.54254/2755-2721/31/20230136.

Full text
Abstract:
As crowd-based activities continue to surge in locales such as markets and restaurants, the significance of understanding pedestrian flow is increasingly evident. Over recent years, advancements in dynamic pedestrian detection, facilitated by the YOLO (You Only Look Once) algorithm, have seen widespread application in areas like crowd management and occupancy estimation. The YOLO algorithm has demonstrated high accuracy and efficiency in real-time object tracking and counting. However, for specific use cases, data derived solely from monitoring pedestrian flows may prove inadequate. This study presents YOLO-Gender, a system leveraging YOLO and Convolutional Neural Network (CNN) for pedestrian tracking and gender classification. The objective is to enhance the richness of data extracted from surveillance camera footage, thus rendering it more valuable for societal applications. The YOLO suite of algorithms, hailed for their superior performance and rapid iteration speed, is among the most extensively utilized tools in the field. The proposed system is predicated on YOLO v8, the most advanced iteration of the YOLO algorithm, released in 2023, which boasts its highest accuracy to date.
APA, Harvard, Vancouver, ISO, and other styles
40

Khubrani, Mousa Mohammed, Fathe Jeribi, Ali Tahir, and Abdulnasser Abdulwakil Metwally. "Panoramic Dental X-Ray Restorative Elements Segmentation using Hybrid Deep Learning." WSEAS TRANSACTIONS ON COMPUTERS 23 (December 31, 2024): 328–35. https://doi.org/10.37394/23205.2024.23.32.

Full text
Abstract:
Panoramic radiography is a commonly used imaging technique for dental X-rays and serves as a diagnostic tool in dentistry. The study introduced a hybrid deep learning approach for detecting and segmenting dental restorative elements from panoramic dental X-rays. By integrating the You Only Look Once (YOLO v8) model for object detection and the Segment Anything Model (SAM) for segmentation, the aim is to enhance the identification of different dental restorative elements such as dental implants, crowns, fillings, and root canals. The dataset of the study comprised 1290 dental X-ray images. The YOLO model effectively recognizes regions of interest and generates bounding boxes, and SAM is then utilized to achieve precise segmentation. The results demonstrate high classification accuracy: 95% for fillings, 88% for crowns, 93% for root canals, and 97% for implants; the Intersection over Union (IoU) metrics further support the system's accuracy. The results show significant improvement in accuracy and highlight the effectiveness of the hybrid approach in refining diagnostic precision and enhancing efficiency in dental imaging.
APA, Harvard, Vancouver, ISO, and other styles
41

Cheng, Yuxuan, Yidan Huang, Jingjing Zhang, Xuehong Zhang, Qiaohua Wang, and Wei Fan. "Robust Detection of Cracked Eggs Using a Multi-Domain Training Method for Practical Egg Production." Foods 13, no. 15 (2024): 2313. http://dx.doi.org/10.3390/foods13152313.

Full text
Abstract:
The presence of cracks reduces egg quality and safety, and can easily cause food safety hazards for consumers. Machine vision-based methods for cracked egg detection have achieved significant success on in-domain egg data. However, the performance of deep learning models usually decreases under practical industrial scenarios, with different egg varieties, origins, and environmental changes. Existing research that relies on improving network structures or increasing training data volumes cannot effectively solve the problem of model performance decline on unknown egg testing data in practical egg production. To address these challenges, a novel and robust detection method is proposed to extract maximally domain-invariant features and thereby enhance model performance on unknown test egg data. Firstly, multi-domain egg data are built from different egg origins and acquisition devices. Then, a multi-domain training strategy is established using Maximum Mean Discrepancy with Normalized Squared Feature Estimation (NSFE-MMD) to obtain the optimal matching egg training domain. With the NSFE-MMD method, the original deep learning model can be applied without network structure improvements, which avoids an extremely complex tuning process and hyperparameter adjustments. Finally, robust cracked egg detection experiments are carried out on several unknown testing egg domains. The YOLOV5 (You Only Look Once v5) model trained with the proposed multi-domain training method with NSFE-MMD has a detection mAP of 86.6% on the unknown test Domain 4, and the YOLOV8 (You Only Look Once v8) model has a detection mAP of 88.8% on Domain 4, increases of 8% and 4.4% over the best-performing models trained on a single domain, and of 4.7% and 3.7% over models trained on all domains. In addition, the YOLOV5 model trained with the proposed multi-domain training method has a detection mAP of 87.9% on egg data of the unknown testing Domain 5.
The experimental results demonstrate the robustness and effectiveness of the proposed multi-domain training method, which is well suited to the large quantity and variety of eggs in practical detection production.
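The NSFE-MMD criterion above builds on the Maximum Mean Discrepancy, which measures the distance between two sample distributions in a kernel feature space. A minimal pure-Python sketch of the plain biased MMD² estimate on scalar features (the paper's normalized squared feature estimation step is not reproduced here, and the RBF kernel with its `gamma` value is an assumption):

```python
import math

def rbf(x, y, gamma):
    """RBF (Gaussian) kernel between two scalar features."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd_squared(xs, ys, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples xs and ys."""
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy
```

Identical sample sets yield an MMD of zero, while samples drawn from shifted distributions score higher, which is the property a domain-matching strategy can exploit to pick the training domain closest to the deployment data.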
APA, Harvard, Vancouver, ISO, and other styles
42

Venkateswarlu, K. "YOLO Based Advanced Smart Traffic Assistance Platform for Number Plate and Helmet Detection." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (2023): 4204–8. http://dx.doi.org/10.22214/ijraset.2023.54414.

Full text
Abstract:
Abstract: Nowadays, road accidents are one of the major causes of human death. The most common reason for motorcycle deaths is that many riders fail to conform to the law requiring helmets. This paper presents software using YOLO v8 to recognize, in an automated way, motorbike riders who are not obeying the helmet law. The helmet and license plate detection system using YOLO v8 is a computer vision technology-based system that utilizes the You Only Look Once (YOLO) object detection algorithm to detect helmets and license plates in real time. The system is designed to improve safety on roads and highways by detecting riders without helmets and vehicles without proper license plates. The system consists of motorcycle detection, helmet and no-helmet detection, as well as bike license plate recognition. The system is capable of processing images from a variety of sources, including traffic cameras and drones, and can detect the presence or absence of helmets and license plates in the image frames. It uses a deep learning model trained on a large dataset of annotated images to identify and classify objects. The output of the system includes a bounding box around each detected object and a label indicating whether it is a helmet or a license plate. The system can also be configured to generate alerts or notifications when violations are detected. Overall, this system provides a valuable tool for law enforcement.
APA, Harvard, Vancouver, ISO, and other styles
43

Xia, Chen. "Rapid Strawberry Ripeness Detection And 3D Localization of Picking Point Based on Improved YOLO V8-Pose with RGB-Camera." Journal of Electrical Systems 20, no. 3s (2024): 2171–81. http://dx.doi.org/10.52783/jes.1840.

Full text
Abstract:
Accurate identification of strawberries at different growth stages, as well as determination of optimal picking points by strawberry-picking robots, is a key issue in the field of agricultural automation. In this paper, a fast detection method for strawberry ripeness and picking points based on an improved YOLO (You Only Look Once) v8-Pose model and an RGB-D depth camera is proposed to address this problem. By comparing the YOLO v5-Pose, YOLO v7-Pose, and YOLO v8-Pose models, the YOLO v8-Pose model is chosen as the fundamental model for strawberry ripeness and picking point detection. To further improve detection accuracy, this paper makes targeted improvements: all the Concat modules in the Neck are replaced with BiFPN for richer feature fusion, which enhances the global feature extraction capability of the model, and the MobileViTv3 framework is employed to restructure the backbone network, augmenting the model's capacity for contextual feature extraction. Subsequently, the output-side CIoU loss function is supplanted with the SIoU loss function, accelerating the model's convergence. The enhanced YOLO v8-Pose demonstrates a 97.85% mAP-kp value, a 5.49% improvement over the initial model configuration. To accurately localize the picking points in three dimensions, the detected points are projected into the corresponding depth map to obtain their 3D coordinates. The experimental results show that the mean absolute error and the mean absolute percentage error of strawberry picking point localization are 0.63 cm and 1.16%, respectively. This study introduces a method capable of concurrently detecting strawberry maturity and accurately localizing the picking point.
This investigation holds considerable theoretical and practical relevance for augmenting the intelligence of strawberry-harvesting robots and realizing automation and smart capabilities in agricultural production.
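Recovering a picking point's 3D coordinates from a pixel and its depth value, as described above, typically uses the pinhole camera model; a sketch under that assumption (the intrinsics `fx, fy, cx, cy` are illustrative placeholders, not values from the paper):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z into camera-frame 3D coordinates.

    fx, fy are the focal lengths in pixels; (cx, cy) is the principal point.
    The returned coordinates share the units of `depth`.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

For example, with a principal point at (320, 240) and focal lengths of 500 px, the image center at 1 m depth maps to (0, 0, 1); real systems would read `depth` from the aligned RGB-D depth map at the detected keypoint.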
APA, Harvard, Vancouver, ISO, and other styles
44

Mustafa, Zaid, and Heba Nsour. "Using Computer Vision Techniques to Automatically Detect Abnormalities in Chest X-rays." Diagnostics 13, no. 18 (2023): 2979. http://dx.doi.org/10.3390/diagnostics13182979.

Full text
Abstract:
Our research focused on creating an advanced machine-learning algorithm that accurately detects anomalies in chest X-ray images to provide healthcare professionals with a reliable tool for diagnosing various lung conditions. To achieve this, we analysed a vast collection of X-ray images and utilised sophisticated visual analysis techniques, such as deep learning (DL) algorithms, object recognition, and categorisation models. To create our model, we used a large training dataset of chest X-rays, which provided valuable information for visualising and categorising abnormalities. We also utilised various data augmentation methods, such as scaling, rotation, and imitation, to increase the diversity of images used for training. We adopted the widely used You Only Look Once (YOLO) v8 algorithm, an object recognition paradigm that has demonstrated positive outcomes in computer vision applications, and modified it to classify X-ray images into distinct categories, such as respiratory infections, tuberculosis (TB), and lung nodules. It was particularly effective in identifying unique and crucial findings that may otherwise be difficult to detect using traditional diagnostic methods. Our findings demonstrate that healthcare practitioners can reliably use machine learning (ML) algorithms to diagnose various lung disorders with greater accuracy and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
45

Di Cecio, G., A. Manco, and G. Gigante. "On-board drone classification with Deep Learning and System-on-Chip implementation." Journal of Physics: Conference Series 2716, no. 1 (2024): 012059. http://dx.doi.org/10.1088/1742-6596/2716/1/012059.

Full text
Abstract:
Abstract In recent years, the increasing use of drones has raised significant safety concerns and made drones serious security threats. To address these concerns, Counter-UAS Systems (CUS) are capturing the interest of research and industry. Consequently, the development of effective drone detection technologies has become a critical research focus. The proposed work explores the application of edge computing to drone classification. It tunes a Deep Learning model, You Only Look Once (YOLO), and implements it on Field Programmable Gate Array (FPGA) technology. FPGAs are considered advantageous over conventional processors since they enable parallelism and can be used to create high-speed, low-power, and low-latency circuit designs, and so satisfy the stringent Size, Weight and Power (SWaP) requirements of a drone-based implementation. In detail, two different YOLO neural networks, YOLO v3 and v8, are trained and evaluated on a large dataset constructed from drone images at various distances. The two models are then implemented on a System-on-Chip (SoC). In order to demonstrate the feasibility of on-board AI image processing on a drone, the evaluation assesses the classification accuracy and computational performance, such as latency.
APA, Harvard, Vancouver, ISO, and other styles
46

Luo, Shuyuan, Jun Zhao, Bin Jia, and QingSong Niu. "BFL-YOLOv8: surface defect detection algorithm for titanium alloy plates based on improved YOLOv8." Insight - Non-Destructive Testing and Condition Monitoring 67, no. 4 (2025): 208–14. https://doi.org/10.1784/insi.2025.67.4.208.

Full text
Abstract:
Defect detection in titanium alloy plates is a crucial step in industrial production, but surface defects are often minute and sensitive to lighting, leading to missed detections in industrial environments. To address this issue, an improved defect detection algorithm, BFL-YOLOv8, based on the You Only Look Once v8 (YOLOv8) model, is proposed to detect common surface defects on titanium alloy plates, such as cracks, spots and scratches. The system incorporates the BiFormer attention mechanism to enhance the model's ability to extract and process image features, integrates the bidirectional feature pyramid network (BiFPN) for weighted fusion and bidirectional cross-scale connections and further uses the regularisation flow technique of the LLFlow algorithm to eliminate the interference of highlights and shadows in the dataset. The experimental results show that BFL-YOLOv8 achieves a mean average precision (mAP) of 93.8% on the titanium alloy plate defect dataset, an 8.6% improvement over the original YOLOv8 model, and balances detection accuracy and speed well. It demonstrates excellent detection ability compared to other similar target detection models and can be applied to defect detection tasks for titanium alloy plates in various complex environments.
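The BiFPN weighted fusion mentioned above combines multi-scale features with learned non-negative weights rather than plain concatenation. A scalar-feature sketch of BiFPN's "fast normalized fusion" rule (illustrative only; real fusion operates on whole feature maps, and the weights are learned during training):

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: clamp weights to be non-negative, normalize to sum ~1."""
    w = [max(x, 0.0) for x in weights]  # ReLU keeps each weight non-negative
    total = sum(w) + eps                # eps avoids division by zero
    return sum(wi * fi for wi, fi in zip(w, features)) / total
```

With equal weights the rule reduces to an (almost exact) average of the inputs, and a negative learned weight is simply clamped out of the fusion.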
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Mengchen, and Zhenyou Zhang. "Research on Vehicle Target Detection Method Based on Improved YOLOv8." Applied Sciences 15, no. 10 (2025): 5546. https://doi.org/10.3390/app15105546.

Full text
Abstract:
To improve vehicle target detection in complex traffic environments and address the difficulty of building a lightweight detection model, this paper proposes a lightweight vehicle detection model based on an enhanced You Only Look Once v8. The method improves the feature extraction and aggregation network by introducing an Adaptive Downsampling module, which dynamically adjusts the downsampling method, increasing the model's attention to key features, especially small and occluded objects, while maintaining a lightweight structure; this effectively reduces model complexity while improving detection accuracy. A Lightweight Shared Convolution Detection Head was also designed: by building a shared convolution layer with group normalization, the detection head of the original model is improved to reduce redundant calculations and parameters and to enhance global information fusion between feature maps, thereby improving computational efficiency. When tested on the KITTI 2D and UA-DETRAC datasets, the mAP of the proposed model improved by 1.1% and 2.0%, respectively, the FPS improved by 12% and 11%, respectively, the number of parameters was reduced by 33%, and the FLOPs were reduced by 28%.
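Group normalization, used in the shared detection head above, normalizes channels in groups independently of batch size. A per-channel scalar sketch (spatial dimensions and the learnable scale/shift parameters are omitted for brevity; this is not the authors' implementation):

```python
import math

def group_norm(channels, num_groups, eps=1e-5):
    """Normalize a list of per-channel values in groups to zero mean, unit variance."""
    group_size = len(channels) // num_groups
    out = []
    for g in range(num_groups):
        group = channels[g * group_size:(g + 1) * group_size]
        mean = sum(group) / len(group)
        var = sum((c - mean) ** 2 for c in group) / len(group)
        # eps keeps the division stable when a group has near-zero variance
        out.extend((c - mean) / math.sqrt(var + eps) for c in group)
    return out
```

Because the statistics are computed per group rather than per batch, a detection head using group normalization behaves the same at batch size 1 as during training, which is one reason it suits shared, lightweight heads.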
APA, Harvard, Vancouver, ISO, and other styles
48

Yandouzi, Mimoun, Mohammed Berrahal, Mounir Grari, et al. "Semantic segmentation and thermal imaging for forest fires detection and monitoring by drones." Bulletin of Electrical Engineering and Informatics 13, no. 4 (2024): 2784–96. http://dx.doi.org/10.11591/eei.v13i4.7663.

Full text
Abstract:
Forest ecosystems play a crucial role in providing a wide range of ecological, social, and economic benefits. However, the increasing frequency and severity of forest fires pose a significant threat to the sustainability of forests and their functions, highlighting the need for early detection and swift action to mitigate damage. The combination of drones and artificial intelligence, particularly deep learning, proves to be a cost-effective solution for accurately and efficiently detecting forest fires in real time. Deep learning-based image segmentation models can not only be employed for forest fire detection but also play a vital role in damage assessment and support for reforestation efforts. Furthermore, the integration of thermal cameras on drones can significantly enhance sensitivity in forest fire detection. This study undertakes an in-depth analysis of recent advancements in deep learning-based semantic segmentation, with a particular focus on the mask region-based convolutional neural network (Mask R-CNN) model and the you only look once (YOLO) v5, v7, and v8 variants. Emphasis is placed on their suitability for forest fire monitoring using drones equipped with RGB and/or thermal cameras. The conducted experiments have yielded encouraging outcomes across various metrics, underscoring the approach's significance as an invaluable asset for both fire detection and continuous monitoring.
APA, Harvard, Vancouver, ISO, and other styles
49

Kufel, Jakub, Katarzyna Bargieł-Łączek, Maciej Koźlik, et al. "Chest X-ray Foreign Objects Detection Using Artificial Intelligence." Journal of Clinical Medicine 12, no. 18 (2023): 5841. http://dx.doi.org/10.3390/jcm12185841.

Full text
Abstract:
Diagnostic imaging has become an integral part of the healthcare system. In recent years, scientists around the world have been working on artificial intelligence-based tools that help in achieving better and faster diagnoses. Their accuracy is crucial for successful treatment, especially for imaging diagnostics. This study used a deep convolutional neural network to detect four categories of objects on digital chest X-ray images. The data were obtained from the publicly available National Institutes of Health (NIH) Chest X-ray (CXR) Dataset. In total, 112,120 CXRs from 30,805 patients were manually checked for foreign objects: vascular port, shoulder endoprosthesis, necklace, and implantable cardioverter-defibrillator (ICD). Then, they were annotated with the use of a computer program, and the necessary image preprocessing was performed, such as resizing, normalization, and cropping. The object detection model was trained using the You Only Look Once v8 architecture and the Ultralytics framework. The results showed not only that the obtained average precision of foreign object detection on the CXR was 0.815 but also that the model can be useful in detecting foreign objects on the CXR images. Models of this type may be used as a tool for specialists, in particular, with the growing popularity of radiology comes an increasing workload. We are optimistic that it could accelerate and facilitate the work to provide a faster diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Ziyi, Xin Lan, and Hui Wang. "Comparative Analysis of YOLO Series Algorithms for UAV-Based Highway Distress Inspection: Performance and Application Insights." Sensors 25, no. 5 (2025): 1475. https://doi.org/10.3390/s25051475.

Full text
Abstract:
As established unmanned aerial vehicle (UAV) highway distress detection (HDD) faces the dual challenges of accuracy and efficiency, this paper conducted a comparative study of the YOLO (You Only Look Once) series of algorithms in UAV-based HDD to provide a reference for model selection. YOLOv5-l and v9-c achieved the highest detection accuracy, with YOLOv5-l performing well in mean and per-class detection precision and recall, while YOLOv9-c showed poor performance in these aspects. In terms of detection efficiency, YOLOv10-n, v7-t, and v11-n achieved the highest levels, while YOLOv5-n, v8-n, and v10-n had the smallest model sizes. Notably, YOLOv11-n was the best-performing model in terms of combined detection efficiency, model size, and computational complexity, making it a promising candidate for embedded real-time HDD. YOLOv5-s and v11-s were found to balance detection accuracy and lightweight design, although their efficiency was only average. When comparing the t/n and l/c versions, the changes in the backbone network of YOLOv9 had the greatest impact on detection accuracy, followed by the network depth_multiple and width_multiple of YOLOv5. The relative compression degrees of YOLOv5-n and YOLOv8-n were the highest, and v9-t achieved the greatest efficiency improvement in UAV HDD, followed by YOLOv10-n and v11-n.
APA, Harvard, Vancouver, ISO, and other styles