
Journal articles on the topic 'Pest detection'


Consult the top 50 journal articles for your research on the topic 'Pest detection.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Rana, Harshil, and Reema Pandya. "Pest Detection System." International Journal of Computer Sciences and Engineering 9, no. 12 (2021): 23–25. http://dx.doi.org/10.26438/ijcse/v9i12.2325.

2

Guo, Boyu, Jianji Wang, Minghui Guo, Miao Chen, Yanan Chen, and Yisheng Miao. "Overview of Pest Detection and Recognition Algorithms." Electronics 13, no. 15 (2024): 3008. http://dx.doi.org/10.3390/electronics13153008.

Abstract:
Detecting and recognizing pests are paramount for ensuring the healthy growth of crops, maintaining ecological balance, and enhancing food production. With the advancement of artificial intelligence technologies, traditional pest detection and recognition algorithms based on manually selected pest features have gradually been substituted by deep learning-based algorithms. In this review paper, we first introduce the primary neural network architectures and evaluation metrics in the field of pest detection and pest recognition. Subsequently, we summarize widely used public datasets for pest detection and recognition. Following this, we present various pest detection and recognition algorithms proposed in recent years, providing detailed descriptions of each algorithm and their respective performance metrics. Finally, we outline the challenges that current deep learning-based pest detection and recognition algorithms encounter and propose future research directions for related algorithms.
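The evaluation metrics this survey covers (mAP50 and its variants) all build on Intersection over Union between predicted and ground-truth boxes. A minimal illustrative sketch, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# At the mAP50 threshold, a prediction counts as a true positive
# when its IoU with a ground-truth box is at least 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

The threshold in a metric name (0.5 in mAP50, a sweep from 0.5 to 0.95 in mAP50:95) is exactly the cutoff applied to this value.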
3

Zhu, Ruixue, Fengqi Hao, and Dexin Ma. "Research on Polygon Pest-Infected Leaf Region Detection Based on YOLOv8." Agriculture 13, no. 12 (2023): 2253. http://dx.doi.org/10.3390/agriculture13122253.

Abstract:
Object detection in deep learning provides a viable solution for detecting crop-pest-infected regions. However, existing rectangle-based object detection methods are insufficient to accurately detect the shape of pest-infected regions. In addition, the method based on instance segmentation has a weak ability to detect the pest-infected regions at the edge of the leaves, resulting in unsatisfactory detection results. To solve these problems, we constructed a new polygon annotation dataset called PolyCorn, designed specifically for detecting corn leaf pest-infected regions. This was made to address the scarcity of polygon object detection datasets. Building upon this, we proposed a novel object detection model named Poly-YOLOv8, which can accurately and efficiently detect corn leaf pest-infected regions. Furthermore, we designed a loss calculation algorithm that is insensitive to ordering, thereby enhancing the robustness of the model. Simultaneously, we introduced a loss scaling factor based on the perimeter of the polygon, improving the detection ability for small objects. We constructed comparative experiments, and the results demonstrate that Poly-YOLOv8 outperformed other models in detecting irregularly shaped pest-infected regions, achieving 67.26% in mean average precision under 0.5 threshold (mAP50) and 128.5 in frames per second (FPS).
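The perimeter-based loss scaling factor described in this abstract can be illustrated with a small sketch. The function names and the `ref_perimeter` constant are hypothetical, not taken from the Poly-YOLOv8 paper; the idea shown is only that polygons with short perimeters (small objects) have their loss up-weighted:

```python
import math

def polygon_perimeter(vertices):
    """Perimeter of a closed polygon given as a list of (x, y) vertices."""
    return sum(
        math.dist(vertices[i], vertices[(i + 1) % len(vertices)])
        for i in range(len(vertices))
    )

def perimeter_scale(vertices, ref_perimeter=100.0):
    """Hypothetical loss-scaling factor: smaller polygons get larger weights
    relative to a reference perimeter, so small targets are not drowned out."""
    return ref_perimeter / max(polygon_perimeter(vertices), 1e-6)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]  # perimeter 40
print(perimeter_scale(square))
```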
4

Radha, P., and V. Arockia Mary Epsy. "Pest Detection Using Image Denoising and Cascaded Unet Segmentation for Pest Images." Tuijin Jishu/Journal of Propulsion Technology 44, no. 4 (2023): 1359–71. http://dx.doi.org/10.52783/tjjpt.v44.i4.1040.

Abstract:
This study proposes a novel approach for pest detection in pest images using image denoising and cascaded UNET segmentation. The proposed approach involves the use of a hybrid neural network RESNET50 with CNN for image dataset training, optimized using Believed Adam Optimization. The images are then preprocessed using a superior MLP model for image denoising, which enhances the image quality and reduces the noise present in the image. The images are then passed through an adaptive UNET architecture for image segmentation, which is based on domain adaptation and semantic segmentation. The cascaded UNET segmentation improves the segmentation accuracy, and the domain adaptation ensures that the model can be applied to new datasets without requiring additional training. The proposed approach achieves a high accuracy rate of 98.5% in detecting pest images. This approach can be used in various applications related to pest detection and management, including agriculture and pest control.
5

Doan, Thanh-Nghi. "Large-Scale Insect Detection With Fine-Tuning YOLOX." International Journal of Membrane Science and Technology 10, no. 2 (2023): 892–915. http://dx.doi.org/10.15379/ijmst.v10i2.1306.

Abstract:
With the aim of detecting insect pests at an early stage, there has been an increasing demand for insect pest detection and classification, particularly in large-scale setups. Therefore, the aim of this research is to introduce a new real-time pest detection technique using a deep convolutional neural network, which not only offers improved accuracy but also faster speed and less computational effort. The networks were constructed using various modern object detector models such as YOLOv4, YOLOv5, and YOLOX. Our proposed networks were evaluated on a standard large-scale insect pest dataset, IP102, as well as on our collected dataset, Insect10. The experimental results demonstrate that our system surpasses previous methods and achieves satisfactory performance with 84.84% mAP on the Insect10 dataset and 54.19% mAP on the IP102 dataset. Our system can deliver precise and real-time pest detection and identification for agricultural crops, enabling highly accurate end-to-end pest detection that can be applied in realistic farming scenarios.
6

Fang, Hao, Binbin Shi, Yongpeng Sun, Neal Xiong, and Lijuan Zhang. "APest-YOLO: A Multi-Scale Agricultural Pest Detection Model Based on Deep Learning." Applied Engineering in Agriculture 40, no. 5 (2024): 553–64. http://dx.doi.org/10.13031/aea.15987.

Abstract:
Highlights:
- We propose APest-YOLO, an innovative agricultural pest detection model founded on a lightweight approach, improving the efficiency of pest detection while also reducing the model's size.
- The model incorporates a novel grouping atrous spatial pyramid pooling fast module with four convolution layers to enhance multi-scale pest feature representation, aiming for improved detection accuracy. It also utilizes a convolutional block attention module to reduce noise and complexity in background images, facilitating the extraction of more refined and smoother pest features for accurate detection.
- We conducted experiments on agricultural pest detection using a comprehensive multi-pest dataset. The APest-YOLO model surpasses existing detection models in terms of mAP0.5 and mAP0.5:0.95.

Abstract: Crop pests and diseases pose a significant threat to smart agriculture, making pest detection a critical component in agricultural applications. However, current detection methods often struggle to effectively identify multi-scale pest data. In response, we present a novel agricultural pest detection model (APest-YOLO) based on a lightweight approach. The APest-YOLO model enhances pest detection efficiency while reducing model size, distinguishing it from the baseline models. Our model features an original grouping atrous spatial pyramid pooling fast module, comprising four convolution layers with varying rates to capture multi-scale and multi-level pest characteristics. Additionally, we incorporate a convolutional block attention module to extract smoother features from pest images with noisy and complex backgrounds. We evaluated the APest-YOLO model on a large-scale multi-pest dataset encompassing 37 pest species; it achieved 99.3% mAP0.5 and outperformed the baseline models, demonstrating effective pest species detection capabilities.
Keywords: Attention mechanism, Convolutional neural network, Intelligent agriculture, Pest detection, YOLO.
7

Yin, Jianjun, Pengfei Huang, Deqin Xiao, and Bin Zhang. "A Lightweight Rice Pest Detection Algorithm Using Improved Attention Mechanism and YOLOv8." Agriculture 14, no. 7 (2024): 1052. http://dx.doi.org/10.3390/agriculture14071052.

Abstract:
Intelligent pest detection algorithms are capable of effectively detecting and recognizing agricultural pests, providing important recommendations for field pest control. However, existing recognition models have shortcomings such as poor accuracy or a large number of parameters. Therefore, this study proposes a lightweight and accurate rice pest detection algorithm based on improved YOLOv8. Firstly, a Multi-branch Convolutional Block Attention Module (M-CBAM) is constructed in the YOLOv8 network to enhance the feature extraction capability for pest targets, yielding better detection results. Secondly, the Minimum Points Distance Intersection over Union (MPDIoU) is introduced as a bounding box loss metric, enabling faster model convergence and improved detection results. Lastly, lightweight Ghost convolutional modules are utilized to significantly reduce model parameters while maintaining optimal detection performance. The experimental results demonstrate that the proposed method outperforms other detection models, with improvements observed in all evaluation metrics compared to the baseline model. On the test set, this method achieves a detection average precision of 95.8% and an F1-score of 94.6% with only 2.15 M model parameters, meeting the requirements of both accuracy and lightweight design. The efficacy of this approach is validated by the experimental findings, which provide specific solutions and technical references for intelligent pest detection.
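For readers unfamiliar with the MPDIoU metric cited in this abstract, here is a sketch following its published definition: standard IoU penalized by the squared distances between corresponding top-left and bottom-right corners, normalized by the squared image diagonal. This is an illustration under that reading of the definition, not the paper's code:

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """Minimum Points Distance IoU (sketch). Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    diag_sq = img_w ** 2 + img_h ** 2  # squared image diagonal
    d1 = (box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2  # top-left corners
    d2 = (box_a[2] - box_b[2]) ** 2 + (box_a[3] - box_b[3]) ** 2  # bottom-right corners
    return iou - d1 / diag_sq - d2 / diag_sq

def mpdiou_loss(box_a, box_b, img_w, img_h):
    """Regression loss: 1 - MPDIoU, zero for a perfect match."""
    return 1.0 - mpdiou(box_a, box_b, img_w, img_h)
```

Unlike plain IoU, the corner-distance terms keep the loss informative even when boxes do not overlap at all, which is what speeds up convergence.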
8

Elci, Brundha, and Moulyashree S. "Pest Detection System for Farmers." International Research Journal of Computer Science 12, no. 04 (2025): 171–76. https://doi.org/10.26562/irjcs.2025.v1204.10.

Abstract:
This paper presents lightweight web-based pest detection software that aids in the early detection of crop pests using image classification techniques. The system is designed to support real-time predictions, user-friendly dashboards, and access control based on role-based logins. Built using React.js, Node.js, and MySQL, it can support up to 1000 user records with high efficiency. The software also integrates graphical visualizations using Recharts, helping users track pest prediction history and class distribution with confidence levels. This solution aims to improve agricultural productivity by offering a simple, accessible diagnostic tool for farmers, researchers, and agronomists.
9

Liu, Dayang, Feng Lv, Jingtao Guo, Huiting Zhang, and Liangkuan Zhu. "Detection of Forestry Pests Based on Improved YOLOv5 and Transfer Learning." Forests 14, no. 7 (2023): 1484. http://dx.doi.org/10.3390/f14071484.

Abstract:
Infestations or parasitism by forestry pests can lead to adverse consequences for tree growth, development, and overall tree quality, ultimately resulting in ecological degradation. The identification and localization of forestry pests are of utmost importance for effective pest control within forest ecosystems. To tackle the challenges posed by variations in pest poses and similarities between different classes, this study introduced a novel end-to-end pest detection algorithm that leverages deep convolutional neural networks (CNNs) and a transfer learning technique. The basic architecture of the method is YOLOv5s, and the C2f module is adopted to replace part of the C3 module to obtain richer gradient information. In addition, the DyHead module is applied to improve the size, task, and spatial awareness of the model. To optimize network parameters and enhance pest detection ability, the model is initially trained using an agricultural pest dataset and subsequently fine-tuned with the forestry pest dataset. A comparative analysis was performed between the proposed method and other mainstream target detection approaches, including YOLOv4-Tiny, YOLOv6, YOLOv7, YOLOv8, and Faster RCNN. The experimental results demonstrated impressive performance in detecting 31 types of forestry pests, achieving a detection precision of 98.1%, recall of 97.5%, and mAP@.5:.95 of 88.1%. Significantly, our method outperforms all the compared target detection methods, showcasing a minimum improvement of 2.1% in mAP@.5:.95. The model has shown robustness and effectiveness in accurately detecting various pests.
10

Huang, Yiqi, Zhenhao Liu, Hehua Zhao, et al. "YOLO-YSTs: An Improved YOLOv10n-Based Method for Real-Time Field Pest Detection." Agronomy 15, no. 3 (2025): 575. https://doi.org/10.3390/agronomy15030575.

Abstract:
The use of yellow sticky traps is a green pest control method that utilizes the pests’ attraction to the color yellow. The use of yellow sticky traps not only controls pest populations but also enables monitoring, offering a more economical and environmentally friendly alternative to pesticides. However, the small size and dense distribution of pests on yellow sticky traps lead to lower detection accuracy when using lightweight models. On the other hand, large models suffer from longer training times and deployment difficulties, posing challenges for pest detection in the field using edge computing platforms. To address these issues, this paper proposes a lightweight detection method, YOLO-YSTs, based on an improved YOLOv10n model. The method aims to balance pest detection accuracy and model size and has been validated on edge computing platforms. This model incorporates SPD-Conv convolutional modules, the iRMB inverted residual block attention mechanism, and the Inner-SIoU loss function to improve the YOLOv10n network architecture, ultimately addressing the issues of missed and false detections for small and overlapping targets while balancing model speed and accuracy. Experimental results show that the YOLO-YSTs model achieved precision, recall, mAP50, and mAP50–95 values of 83.2%, 83.2%, 86.8%, and 41.3%, respectively, on the yellow sticky trap dataset. The detection speed reached 139 FPS, with GFLOPs at only 8.8. Compared with the YOLOv10n model, the mAP50 improved by 1.7%. Compared with other mainstream object detection models, YOLO-YSTs also achieved the best overall performance. Through improvements to the YOLOv10n model, the accuracy of pest detection on yellow sticky traps was effectively enhanced, and the model demonstrated good detection performance when deployed on edge mobile platforms. In conclusion, the proposed YOLO-YSTs model offers more balanced performance in the detection of pest images on yellow sticky traps. It performs well when deployed on edge mobile platforms, making it of significant importance for field pest monitoring and integrated pest management.
11

Xiang, Qiuchi, Xiaoning Huang, Zhouxu Huang, Xingming Chen, Jintao Cheng, and Xiaoyu Tang. "Yolo-Pest: An Insect Pest Object Detection Algorithm via CAC3 Module." Sensors 23, no. 6 (2023): 3221. http://dx.doi.org/10.3390/s23063221.

Abstract:
Insect pests have always been one of the main hazards affecting crop yield and quality in traditional agriculture. An accurate and timely pest detection algorithm is essential for effective pest control; however, the existing approach suffers from a sharp performance drop when it comes to the pest detection task due to the lack of learning samples and models for small pest detection. In this paper, we explore and study the improvement methods of convolutional neural network (CNN) models on the Teddy Cup pest dataset and further propose a lightweight and effective agricultural pest detection method for small target pests, named Yolo-Pest, for the pest detection task in agriculture. Specifically, we tackle the problem of feature extraction in small sample learning with the proposed CAC3 module, which is built in a stacking residual structure based on the standard BottleNeck module. By applying a ConvNext module based on the vision transformer (ViT), the proposed method achieves effective feature extraction while keeping a lightweight network. Comparative experiments prove the effectiveness of our approach. Our proposal achieves 91.9% mAP0.5 on the Teddy Cup pest dataset, which outperforms the Yolov5s model by nearly 8% in mAP0.5. It also achieves great performance on public datasets, such as IP102, with a great reduction in the number of parameters.
12

Srilekha, N., V. Tejaswini, M. Sneha, Abdul Aas Shaik, Sohail Zahid, and Zaheer Shaik. "Deep Learning for Pest Detection and Extraction." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem42777.

Abstract:
Pest infestations pose a significant challenge to agriculture, resulting in substantial crop damage and economic losses. Traditional pest detection systems primarily rely on Convolutional Neural Networks (CNNs) for image classification. While CNNs are effective at categorizing images and identifying pests, they face limitations in handling scenarios involving multiple pests, varying orientations, and complex backgrounds. Additionally, CNNs lack the ability to localize pests within images, providing only image-level classifications rather than detailed spatial information. To address these limitations, the proposed system introduces YOLOv5, a state-of-the-art object detection model. Unlike CNN-based approaches, YOLOv5 excels in detecting and localizing multiple pests in real-time, even in challenging conditions. By producing both bounding boxes and class labels, YOLOv5 offers precise localization and identification of pests, enabling targeted pest management. Its real-time detection capabilities and superior accuracy in complex environments make it a powerful tool for agricultural applications. The transition from CNNs to YOLOv5 brings several advantages, including enhanced detection performance, the ability to identify multiple pests in a single frame, and scalability across diverse datasets and environmental conditions.
13

Luo, Wanbo. "Pest-YOLO: A YOLOv5-Based Lightweight Crop Pest Detection Algorithm." International Journal of Engineering and Technology Innovation 15, no. 1 (2024): 11–25. https://doi.org/10.46604/ijeti.2024.13748.

Abstract:
Traditional crop pest detection methods face the challenge of numerous parameters and computations, making them difficult to deploy on embedded devices with limited resources. Consequently, a lightweight network is an effective solution to this issue. Based on you only look once (YOLO)v5, this paper aims to design and validate a lightweight and effective pest detector called pest-YOLO. First, a random background augmentation method is proposed to reduce the prediction error rate. Furthermore, a MobileNetV3-light backbone replaces the YOLOv5n backbone to reduce parameters and computations. Finally, the Convolutional Block Attention Module (CBAM) is integrated into the new network to compensate for the reduction in accuracy. Compared to the YOLOv5n model, the pest-YOLO model's parameters and Giga Floating-Point Operations (GFLOPs) decrease significantly, by about 33% and 52.5% respectively, and its Frames per Second (FPS) increase by approximately 11.1%. In contrast, the Mean Average Precision (mAP50) declines slightly by 2.4%, from 92.7% to 90.3%.
14

Zhu, Xueyan, Dandan Li, Yancheng Zheng, et al. "A YOLO-Based Model for Detecting Stored-Grain Insects on Surface of Grain Bulks." Insects 16, no. 2 (2025): 210. https://doi.org/10.3390/insects16020210.

Abstract:
Accurate, rapid, and intelligent stored-grain insect detection and counting are important for integrated pest management (IPM). Existing stored-grain insect pest detection models are often not suitable for detecting tiny insects on the surface of grain bulks and often require high computing resources and computational memory. Therefore, this study presents a YOLO-SGInsects model based on YOLOv8s for tiny stored-grain insect detection on the surface of grain bulk by adding a tiny object detection layer (TODL), adjusting the neck network with an asymptotic feature pyramid network (AFPN), and incorporating a hybrid attention transformer (HAT) module into the backbone network. The YOLO-SGInsects model was trained and tested using a GrainInsects dataset with images captured from granaries and laboratory. Experiments on the test set of the GrainInsects dataset showed that the YOLO-SGInsects achieved a stored-grain insect pest detection mean average precision (mAP) of 94.2%, with a counting root mean squared error (RMSE) of 0.7913, representing 2.0% and 0.3067 improvement over the YOLOv8s, respectively. Compared to other mainstream approaches, the YOLO-SGInsects model achieves better detection and counting performance and is capable of effectively handling tiny stored-grain insect pest detection in grain bulk surfaces. This study provides a technical basis for detecting and counting common stored-grain insect pests on the surface of grain bulk.
15

Pazhanivelan, Sellaperumal, K. P. Ragunath, N. S. Sudarmanian, S. Satheesh, and P. Shanmugapriya. "Deep Learning-Based Multi-Class Pest and Disease Detection in Agricultural Fields." Journal of Scientific Research and Reports 31, no. 1 (2025): 538–46. https://doi.org/10.9734/jsrr/2025/v31i12797.

Abstract:
Farmers and agricultural workers would manually inspect crops for signs of pests or use traps to monitor pest populations. The advent of deep learning algorithms such as vision transformers and FastAI ResNet has brought about a significant transformation in pest detection practices. These advanced algorithms leverage the capabilities of artificial intelligence to process vast amounts of data and learn intricate patterns associated with different pest species and their impact on crops. Unlike manual methods, deep learning algorithms can analyze large datasets quickly and accurately, leading to more efficient and effective pest detection. Vision transformers and FastAI ResNet stand out for their ability to continuously learn and adapt to new data, including changes in pest populations over time. This adaptability is crucial in agriculture, where pest dynamics can vary due to factors like climate conditions, environmental changes, and pest control interventions. FastAI ResNet-50 and Vision Transformers have demonstrated remarkable accuracy in classifying various disease classes, indicating their reliability and precision in detecting different pests and diseases affecting crops. Their high accuracies, ranging from 0.95 to 1.00, underscore their effectiveness in agricultural pest detection tasks. However, the study highlights challenges that arise when dealing with more classes in a classification task. Factors such as increased complexity, imbalanced data distributions, and higher-dimensional feature spaces can impact model accuracy. To address these challenges, the study recommends various strategies, including data augmentation, class balancing, robust model architectures, regularization techniques, and transfer learning. Implementing these strategies can help maintain or improve accuracy levels, ensuring that deep learning models remain effective and reliable for agricultural pest detection and disease management applications.
16

Wang, Xuqi, Shanwen Zhang, Xianfeng Wang, and Cong Xu. "Crop pest detection by three-scale convolutional neural network with attention." PLOS ONE 18, no. 6 (2023): e0276456. http://dx.doi.org/10.1371/journal.pone.0276456.

Abstract:
Crop pests seriously affect the yield and quality of crops. Timely and accurate control of crop pests is particularly crucial for crop security, quality of life and a stable agricultural economy. Crop pest detection in the field is an essential step in controlling the pests. The existing convolutional neural network (CNN) based pest detection methods are not satisfactory for small pest recognition and detection in the field because the pests vary widely in color, shape and pose. A three-scale CNN with attention (TSCNNA) model is constructed for crop pest detection by introducing channel and spatial attention mechanisms into the CNN. TSCNNA can improve the sensitivity of the CNN to pests of different sizes under complicated backgrounds and enlarge the receptive field of the CNN, so as to improve the accuracy of pest detection. Experiments are carried out on an image set of common crop pests, and the precision is 93.16%, which is 5.1% and 3.7% higher than ICNN and VGG16, respectively. The results show that the proposed method can achieve both high speed and high accuracy in crop pest detection. This proposed method has practical significance for real-time crop pest control in the field.
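The channel attention the abstract describes can be sketched in a generic CBAM-style form: pool each feature channel globally, pass the average- and max-pooled vectors through a shared two-layer MLP, and gate the channels with a sigmoid. This is an assumption-laden illustration (untrained random weights, NumPy instead of a deep learning framework), not the TSCNNA implementation:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention sketch.

    feat: (C, H, W) feature map; w1, w2: shared MLP weights (hidden, C) and
    (C, hidden). Returns the feature map reweighted per channel.
    """
    c = feat.shape[0]
    avg = feat.reshape(c, -1).mean(axis=1)  # global average pool per channel
    mx = feat.reshape(c, -1).max(axis=1)    # global max pool per channel

    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer

    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid channel gates
    return feat * gate[:, None, None]
```

Spatial attention works analogously, pooling across channels instead of across spatial positions and gating each pixel.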
17

Yue, Guangbo, Yaqiu Liu, Tong Niu, et al. "GLU-YOLOv8: An Improved Pest and Disease Target Detection Algorithm Based on YOLOv8." Forests 15, no. 9 (2024): 1486. http://dx.doi.org/10.3390/f15091486.

Abstract:
In the contemporary context, pest detection is progressively moving toward automation and intelligence. However, current pest detection algorithms still face challenges, such as lower accuracy and slower operation speed in detecting small objects. To address this issue, this study presents a crop pest target detection algorithm, GLU-YOLOv8, designed for complex scenes based on an enhanced version of You Only Look Once version 8 (YOLOv8). The algorithm introduces the SCYLLA-IOU (SIOU) loss function, which enhances the model generalization to various pest sizes and shapes by ensuring smoothness and reducing oscillations during training. Additionally, the algorithm incorporates the Convolutional Block Attention Module (CBAM) and Locality Sensitive Kernel (LSK) attention mechanisms to boost the pest target features. A novel Gated Linear Unit CONV (GLU-CONV) is also introduced to enhance the model’s perceptual and generalization capabilities while maintaining performance. Furthermore, GLU-YOLOv8 includes a small-object detection layer with a feature map size of 160 × 160 to extract more features of small-target pests, thereby improving detection accuracy and enabling more precise localization and identification of small-target pests. The study conducted a comparative analysis between the GLU-YOLOv8 model and other models, such as YOLOv8, Faster RCNN, and RetinaNet, to evaluate detection accuracy and precision. In the Scolytidae forestry pest dataset, GLU-YOLOv8 demonstrated an improvement of 8.2% in mAP@0.50 for small-target detection compared to the YOLOv8 model, with a resulting mAP@0.50 score of 97.4%. Specifically, on the IP102 dataset, GLU-YOLOv8 outperforms the YOLOv8 model with a 7.1% increase in mAP@0.50 and a 5% increase in mAP@0.50:0.95, reaching 58.7% for mAP@0.50. These findings highlight the significant enhancement in the accuracy and recognition rate of small-target detection achieved by GLU-YOLOv8, along with its efficient operational performance. This research provides valuable insights for optimizing small-target detection models for various pests and diseases.
18

Sushma D S, Mohammed Alqhama, Aravind M, Jayanth A B, and Rakshith Kumar K. "Pest Detection and Classification in Peanut Crops." International Research Journal on Advanced Engineering and Management (IRJAEM) 2, no. 05 (2024): 1372–79. http://dx.doi.org/10.47392/irjaem.2024.0189.

Abstract:
Recent advancements in image processing have significantly improved pest detection and classification in peanut crops. Our study introduces an innovative approach that optimizes image features for accurate pest identification. Leveraging insights from successful image analysis methodologies, our model employs a tailored architecture for pest detection, segmentation, and classification tasks. By integrating dual branch segment representations and a dual-layer transformer encoder, we aim to enhance image representations and consolidate pest image segments of varying sizes. We evaluate our approach using three distinct pest datasets—Aphids, Wireworm, and Gram Caterpillar—ensuring comprehensive analysis and model validation. Prior to training, we preprocess the datasets extensively, employing feature extraction techniques and addressing image quality issues. We then apply normalization procedures to standardize the data for seamless integration into our model architecture. Our methodology focuses on extracting key features through self-attention mechanisms and standardized scaling processes to enhance predictive capabilities. Comprehensive experimentation demonstrates the superiority of our approach, outperforming established benchmarks in pest detection and classification with high accuracy rates. In summary, our study presents a novel framework that optimizes feature extraction and enhances predictive accuracy in pest detection and classification for peanut crops, addressing the unique challenges of agricultural pest identification.
19

Li, Kai-Run, Li-Jun Duan, Yang-Jun Deng, Jin-Ling Liu, Chen-Feng Long, and Xin-Hui Zhu. "Pest Detection Based on Lightweight Locality-Aware Faster R-CNN." Agronomy 14, no. 10 (2024): 2303. http://dx.doi.org/10.3390/agronomy14102303.

Abstract:
Accurate and timely monitoring of pests is an effective way to minimize the negative effects of pests in agriculture. Since deep learning-based methods have achieved good performance in object detection, they have been successfully applied for pest detection and monitoring. However, the current pest detection methods fail to balance the relationship between computational cost and model accuracy. Therefore, this paper proposes a lightweight, locality-aware faster R-CNN (LLA-RCNN) method for effective pest detection and real-time monitoring. The proposed model uses MobileNetV3 to replace the original backbone, reduce the computational complexity, and compress the size of the model to speed up pest detection. The coordinate attention (CA) blocks are utilized to enhance the locality information for highlighting the objects under complex backgrounds. Furthermore, the generalized intersection over union (GIoU) loss function and region of interest align (RoI Align) technology are used to improve pest detection accuracy. The experimental results on different types of datasets validate that the proposed model not only significantly reduces the number of parameters and floating-point operations (FLOPs), but also achieves better performance than some popular pest detection methods. This demonstrates strong generalization capabilities and provides a feasible method for pest detection on resource-constrained devices.
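The GIoU loss this abstract mentions has a standard published definition (Rezatofighi et al.): IoU minus the fraction of the smallest enclosing box not covered by the union of the two boxes. A minimal sketch, illustrative rather than taken from the LLA-RCNN code:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Smallest axis-aligned box enclosing both inputs
    cw = max(box_a[2], box_b[2]) - min(box_a[0], box_b[0])
    ch = max(box_a[3], box_b[3]) - min(box_a[1], box_b[1])
    enclose = cw * ch
    return inter / union - (enclose - union) / enclose

# Regression loss is 1 - GIoU; unlike 1 - IoU, it still gives a useful
# gradient when the predicted and ground-truth boxes do not overlap.
```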
APA, Harvard, Vancouver, ISO, and other styles
20

Khalid, Saim, Hadi Mohsen Oqaibi, Muhammad Aqib, and Yaser Hafeez. "Small Pests Detection in Field Crops Using Deep Learning Object Detection." Sustainability 15, no. 8 (2023): 6815. http://dx.doi.org/10.3390/su15086815.

Full text
Abstract:
Deep learning algorithms, such as convolutional neural networks (CNNs), have been widely studied and applied in various fields including agriculture. Agriculture is the most important source of food and income in human life. In most countries, the backbone of the economy is based on agriculture. Pests are one of the major challenges in crop production worldwide. To reduce the overall production and economic loss from pests, advancement in computer vision and artificial intelligence may lead to early and small pest detection with greater accuracy and speed. In this paper, an approach for early pest detection using deep learning and convolutional neural networks has been presented. Object detection is applied on a dataset with images of thistle caterpillars, red beetles, and citrus psylla. The input dataset contains 9875 images of all the pests under different illumination conditions. State-of-the-art Yolo v3, Yolov3-Tiny, Yolov4, Yolov4-Tiny, Yolov6, and Yolov8 have been adopted in this study for detection. All of these models were selected based on their performance in object detection. The images were annotated in the Yolo format. Yolov8 achieved the highest mAP of 84.7% with an average loss of 0.7939, which is better than the results reported in other works when compared to small pest detection. The Yolov8 model was further integrated in an Android application for real time pest detection. This paper contributes the implementation of novel deep learning models, analytical methodology, and a workflow to detect pests in crops for effective pest management.
APA, Harvard, Vancouver, ISO, and other styles
21

Sun, Daozong, Kai Zhang, Hongsheng Zhong, et al. "Efficient Tobacco Pest Detection in Complex Environments Using an Enhanced YOLOv8 Model." Agriculture 14, no. 3 (2024): 353. http://dx.doi.org/10.3390/agriculture14030353.

Full text
Abstract:
Due to the challenges of pest detection in complex environments, this research introduces a lightweight network for tobacco pest identification leveraging enhancements in YOLOv8 technology. Using YOLOv8 large (YOLOv8l) as the base, the neck layer of the original network is replaced with an asymptotic feature pyramid network (AFPN) network to reduce model parameters. A SimAM attention mechanism, which does not require additional parameters, is incorporated to improve the model’s ability to extract features. The backbone network’s C2f model is replaced with the VoV-GSCSP module to reduce the model’s computational requirements. Experiments show the improved YOLOv8 model achieves high overall performance. Compared to the original model, model parameters and GFLOPs are reduced by 52.66% and 19.9%, respectively, while mAP@0.5 is improved by 1%, recall by 2.7%, and precision by 2.4%. Further comparison with popular detection models YOLOv5 medium (YOLOv5m), YOLOv6 medium (YOLOv6m), and YOLOv8 medium (YOLOv8m) shows the improved model has the highest detection accuracy and lightest parameters for detecting four common tobacco pests, with optimal overall performance. The improved YOLOv8 detection model proposed facilitates precise, instantaneous pest detection and recognition for tobacco and other crops, securing high-accuracy, comprehensive pest identification.
APA, Harvard, Vancouver, ISO, and other styles
22

Miranda, Johnny L., Bobby D. Gerardo, and Bartolome T. Tanguilig III. "Pest Detection and Extraction Using Image Processing Techniques." International Journal of Computer and Communication Engineering 3, no. 3 (2014): 189–92. http://dx.doi.org/10.7763/ijcce.2014.v3.317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Wanqing, and Haoyue Fu. "A Lightweight Crop Pest Detection Method Based on Improved RTMDet." Information 15, no. 9 (2024): 519. http://dx.doi.org/10.3390/info15090519.

Full text
Abstract:
To address the issues of low detection accuracy and large model parameters in crop pest detection in natural scenes, this study improves the deep learning object detection model and proposes a lightweight and accurate method RTMDet++ for crop pest detection. First, the real-time object detection network RTMDet is utilized to design the pest detection model. Then, the backbone and neck structures are pruned to reduce the number of parameters and computation. Subsequently, a shortcut connection module is added to the classification and regression branches, respectively, to enhance its feature learning capability, thereby improving its accuracy. Experimental results show that, compared to the original model RTMDet, the improved model RTMDet++ reduces the number of parameters by 15.5%, the computation by 25.0%, and improves the mean average precision by 0.3% on the crop pest dataset IP102. The improved model RTMDet++ achieves a mAP of 94.1%, a precision of 92.5%, and a recall of 92.7% with 4.117M parameters and 3.130G computations, outperforming other object detection methods. The proposed model RTMDet++ achieves higher performance with fewer parameters and computations, which can be applied to crop pest detection in practice and aids in pest control research.
APA, Harvard, Vancouver, ISO, and other styles
24

M, S. Aishwarya, Karthik K, Nandan D, and Rachana N. "A Survey on Pest Detection Systems." Perspectives in Communication, Embedded-systems and Signal-processing - PiCES 6, no. 3 (2022): 14–15. https://doi.org/10.5281/zenodo.6969876.

Full text
Abstract:
Agriculture is the backbone of India. With the growth of technology, several implements are being designed and developed to help farmers get better yields. Be it harvesting machines or sowing and tilling machines, major players in the industry are doing their best to develop innovative products for farmers. Similarly, research is being conducted to identify pests and eliminate them. A survey of such methods is discussed in this paper.
APA, Harvard, Vancouver, ISO, and other styles
25

Shi, Wenxiu, and Nianqiang Li. "APPLICATION OF TARGET DETECTION ALGORITHM BASED ON DEEP LEARNING IN FARMLAND PEST RECOGNITION." International Journal of Artificial Intelligence & Applications (IJAIA) 11 (May 2020): 1–10. https://doi.org/10.5281/zenodo.3889762.

Full text
Abstract:
Combining deep learning technology with a target detection algorithm, this paper proposes a method for farmland pest recognition that realizes automatic recognition of farmland pests and improves recognition accuracy. First, a labeled farmland pest database is established; then a Faster R-CNN model with an improved Inception network is applied for detection; finally, the proposed target detection model is trained and tested on the farmland pest database, achieving an average precision of up to 90.54%.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhao, Zikun, Sai Xu, Huazhong Lu, Xin Liang, Hongli Feng, and Wenjing Li. "Nondestructive Detection of Litchi Stem Borers Using Multi-Sensor Data Fusion." Agronomy 14, no. 11 (2024): 2691. http://dx.doi.org/10.3390/agronomy14112691.

Full text
Abstract:
To enhance lychee quality assessment and address inconsistencies in post-harvest pest detection, this study presents a multi-source fusion approach combining hyperspectral imaging, X-ray imaging, and visible/near-infrared (Vis/NIR) spectroscopy. Traditional single-sensor methods are limited in detecting pest damage, particularly in lychees with complex skins, as they often fail to capture both external and internal fruit characteristics. By integrating multiple sensors, our approach overcomes these limitations, offering a more accurate and robust detection system. Significant differences were observed between pest-free and infested lychees. Pest-free lychees exhibited higher hardness, soluble sugars (11% higher in flesh, 7% higher in peel), vitamin C (50% higher in flesh, 2% higher in peel), polyphenols, anthocyanins, and ORAC values (26%, 9%, and 14% higher, respectively). The Vis/NIR data processed with SG+SNV+CARS yielded a partial least squares regression (PLSR) model with an R2 of 0.82, an RMSE of 0.18, and accuracy of 89.22%. The hyperspectral model, using SG+MSC+SPA, achieved an R2 of 0.69, an RMSE of 0.23, and 81.74% accuracy, while the X-ray method with support vector regression (SVR) reached an R2 of 0.69, an RMSE of 0.22, and 76.25% accuracy. Through feature-level fusion, Recursive Feature Elimination with Cross-Validation (RFECV), and dimensionality reduction using PCA, we optimized hyperparameters and developed a Random Forest model. This model achieved 92.39% accuracy in pest detection, outperforming the individual methods by 3.17%, 10.25%, and 16.14%, respectively. The multi-source fusion approach also improved the overall accuracy by 4.79%, highlighting the critical role of sensor fusion in enhancing pest detection and supporting the development of automated non-destructive systems for lychee stem borer detection.
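The feature-level fusion step described above concatenates per-sample feature vectors from the three sensors before dimensionality reduction and classification. A numpy-only sketch under stated assumptions: the function names are hypothetical, and a plain SVD-based PCA stands in for the paper's RFECV + PCA pipeline:

```python
import numpy as np

def fuse_features(vis_nir, hyperspec, xray):
    # feature-level fusion: concatenate each sample's sensor features column-wise
    return np.concatenate([vis_nir, hyperspec, xray], axis=1)

def pca_reduce(X, n_components):
    # center the fused matrix, then project onto the top principal axes via SVD
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T
```

The reduced matrix would then feed a classifier such as a random forest; the fusion itself is just alignment of samples across sensors plus concatenation.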
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Junwei, Shihao Chen, Nan Liu, Fupin Zhai, and Quan Pan. "Cascaded Aggregation Convolution Network for Salient Grain Pests Detection." Insects 15, no. 7 (2024): 557. http://dx.doi.org/10.3390/insects15070557.

Full text
Abstract:
Pest infestation poses significant threats to grain storage due to pests’ behaviors of feeding, respiration, excretion, and reproduction. Efficient pest detection and control are essential to mitigate these risks. However, accurate detection of small grain pests remains challenging due to their small size, high variability, low contrast, and cluttered background. Salient pest detection focuses on the visual features that stand out, improving the accuracy of pest identification in complex environments. Drawing inspiration from the rapid pest recognition abilities of humans and birds, we propose a novel Cascaded Aggregation Convolution Network (CACNet) for pest detection and control in stored grain. Our approach aims to improve detection accuracy by employing a reverse cascade feature aggregation network that imitates the visual attention mechanism in humans when observing and focusing on objects of interest. The CACNet uses VGG16 as the backbone network and incorporates two key operations, namely feature enhancement and feature aggregation. These operations merge the high-level semantic information and low-level positional information of salient objects, enabling accurate segmentation of small-scale grain pests. We have curated the GrainPest dataset, comprising 500 images showcasing zero to five or more pests in grains. Leveraging this dataset and the MSRA-B dataset, we validated our method’s efficacy, achieving structure measure (S-measure) scores of 91.9% and 90.9% and weighted F-measure scores of 76.4% and 91.0%, respectively. Our approach significantly surpasses the traditional saliency detection methods and other state-of-the-art salient object detection models based on deep learning. This technology shows great potential for pest detection and assessing the severity of pest infestation based on pest density in grain storage facilities. It also holds promise for the prevention and control of pests in agriculture and forestry.
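The weighted F-measure reported here is a saliency-specific metric; its unweighted ancestor is the F-beta score used throughout the saliency literature, conventionally with β² = 0.3 to emphasize precision. A minimal sketch of that generic score (not the paper's weighted variant):

```python
def f_beta(precision, recall, beta2=0.3):
    # F-beta score; saliency papers conventionally set beta^2 = 0.3
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```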
APA, Harvard, Vancouver, ISO, and other styles
28

Valderrama Solis, Manuel Alejandro, Javier Valenzuela Nina, German Alberto Echaiz Espinoza, et al. "Innovative Machine Learning and Image Processing Methodology for Enhanced Detection of Aleurothrixus Floccosus." Electronics 14, no. 2 (2025): 358. https://doi.org/10.3390/electronics14020358.

Full text
Abstract:
This paper presents a methodology for detecting the pest Aleurothrixus floccosus in citrus crops in Pedregal de Arequipa, Peru. The study employs simple random sampling during image collection to minimize bias, alternating and extracting leaves from different citrus trees. Image processing techniques, including noise reduction, edge smoothing, and segmentation, are applied for pest detection. Machine learning algorithms are used to classify the images, culminating in a robust detection methodology. A dataset of 1200 images was analyzed during the study.
APA, Harvard, Vancouver, ISO, and other styles
29

Guo, Qingwen, Chuntao Wang, Deqin Xiao, and Qiong Huang. "An Enhanced Insect Pest Counter Based on Saliency Map and Improved Non-Maximum Suppression." Insects 12, no. 8 (2021): 705. http://dx.doi.org/10.3390/insects12080705.

Full text
Abstract:
Accurately counting the number of insect pests from digital images captured on yellow sticky traps remains a challenge in the field of insect pest monitoring. In this study, we develop a new approach to counting the number of insect pests using a saliency map and improved non-maximum suppression. Specifically, as the background of a yellow sticky trap is simple and the insect pest object is small, we exploit a saliency map to construct a region proposal generator including saliency map building, activation region formation, background–foreground classifier, and tune-up boxes involved in region proposal generation. For each region proposal, a convolutional neural network (CNN) model is used to classify it as a specific insect pest class, resulting in detection bounding boxes. By considering the relationship between detection bounding boxes, we thus develop an improved non-maximum suppression to sophisticatedly handle the redundant detection bounding boxes and obtain the insect pest number through counting the handled detection bounding boxes, each of which covers one insect pest. As this insect pest counter may miscount insect pests that are close to each other, we further integrate the widely used Faster R-CNN with the mentioned insect pest counter to construct a dual-path network. Extensive experimental simulations show that the two proposed insect pest counters achieve significant improvement in terms of F1 score against the state-of-the-art object detectors as well as insect pest detection methods.
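For context, the baseline this paper improves on is greedy non-maximum suppression, which keeps the highest-scoring box and discards lower-scoring boxes that overlap it too much. A minimal sketch of the standard algorithm (not the paper's improved variant):

```python
def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # greedily keep the best-scoring box, drop remaining boxes that overlap it
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou(boxes[best], boxes[j]) <= iou_thresh]
    return keep
```

Counting the surviving boxes is exactly the failure mode the paper targets: two pests sitting close together can be merged into one kept box, hence the improved suppression and the dual-path design.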
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Kunhong, Yi Li, Xuan Wen, et al. "Sticky Trap-Embedded Machine Vision for Tea Pest Monitoring: A Cross-Domain Transfer Learning Framework Addressing Few-Shot Small Target Detection." Agronomy 15, no. 3 (2025): 693. https://doi.org/10.3390/agronomy15030693.

Full text
Abstract:
Pest infestations have always been a major factor affecting tea production. Real-time detection of tea pests using machine vision is a mainstream method in modern agricultural pest control. Currently, there is a notable absence of machine vision devices capable of real-time monitoring for small-sized tea pests in the market, and the scarcity of open-source datasets available for tea pest detection remains a critical limitation. This manuscript proposes a YOLOv8-FasterTea pest detection algorithm based on cross-domain transfer learning, which was successfully deployed in a novel tea pest monitoring device. The proposed method leverages transfer learning from the natural language character domain to the tea pest detection domain, termed cross-domain transfer learning, which is based on the complex and small characteristics shared by natural language characters and tea pests. With sufficient samples in the language character domain, transfer learning can effectively enhance the tiny and complex feature extraction capabilities of deep networks in the pest domain and mitigate the few-shot learning problem in tea pest detection. The information and texture features of small tea pests are more likely to be lost with the layers of a neural network becoming deep. Therefore, the proposed method, YOLOv8-FasterTea, removes the P5 layer and adds a P2 small target detection layer based on the YOLOv8 model. Additionally, the original C2f module is replaced with lighter convolutional modules to reduce the loss of information about small target pests. Finally, this manuscript successfully applies the algorithm to outdoor pest monitoring equipment. Experimental results demonstrate that, on a small sample yellow board pest dataset, the mAP@.5 value of the model increased by approximately 6%, on average, after transfer learning. The YOLOv8-FasterTea model improved the mAP@.5 value by 3.7%, while the model size was reduced by 46.6%.
APA, Harvard, Vancouver, ISO, and other styles
31

Bastian, Ade, Adie Iman Nurzaman, Tri Ferga Prasetyo, and Sri Fatimah. "Roselle Pest Detection and Classification Using Threshold and Template Matching." Journal of Image and Graphics 11, no. 4 (2023): 330–42. http://dx.doi.org/10.18178/joig.11.4.330-342.

Full text
Abstract:
Roselle is a fiber-producing plant with broad benefits as a health food, so many farmers are interested in cultivating it. This study aims to design a roselle pest detection system to reduce the risk of crop failure or reduced yields of roselle calyx. The system for detecting and classifying roselle pests uses thresholding as the digital image processing method, connected via the internet to an information media application, and template matching to detect and classify pests on roselle plants. A detection system using thresholding and template matching methods was successfully built. Because datasets of roselle pests are not yet widely available, detection relied on a dataset built from roselle plant images and limited test data. Testing achieved 75% accuracy; the detection process is affected by lighting and camera quality.
APA, Harvard, Vancouver, ISO, and other styles
32

Xiong, Peng, Cong Zhang, Linfeng He, Xiaoyun Zhan, and Yuantao Han. "Deep learning-based rice pest detection research." PLOS ONE 19, no. 11 (2024): e0313387. http://dx.doi.org/10.1371/journal.pone.0313387.

Full text
Abstract:
With the increasing pressure on global food security, the effective detection and management of rice pests have become crucial. Traditional pest detection methods are not only time-consuming and labor-intensive but also often fail to achieve real-time monitoring and rapid response. This study aims to address the issue of rice pest detection through deep learning techniques to enhance agricultural productivity and sustainability. The research utilizes the IP102 large-scale rice pest benchmark dataset, publicly released by CVPR in 2019, which includes 9,663 images of eight types of pests, with a training-to-testing ratio of 8:2. By optimizing the YOLOv8 model, incorporating the CBAM (Convolutional Block Attention Module) attention mechanism, and the BiFPN (Bidirectional Feature Pyramid Network) for feature fusion, the detection accuracy in complex agricultural environments was significantly improved. Experimental results show that the improved YOLOv8 model achieved mAP@0.5 and mAP@0.5:0.95 scores of 98.8% and 78.6%, respectively, representing increases of 2.8% and 2.35% over the original model. This study confirms the potential of deep learning technology in the field of pest detection, providing a new technological approach for future agricultural pest management.
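The CBAM module referenced above gates feature maps with channel and spatial attention. A numpy sketch of the channel-attention half, with hypothetical weight shapes standing in for the module's learned shared MLP (reduction ratio r):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # feat: (C, H, W); w1: (C//r, C) and w2: (C, C//r) form the shared MLP
    avg_pool = feat.mean(axis=(1, 2))                 # global average pooling
    max_pool = feat.max(axis=(1, 2))                  # global max pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # ReLU hidden layer
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg_pool) + mlp(max_pool))))  # sigmoid
    return feat * gate[:, None, None]                 # rescale each channel
```

The spatial half is analogous but pools across channels and convolves over the (H, W) plane; CBAM applies the two gates in sequence.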
APA, Harvard, Vancouver, ISO, and other styles
33

Carnegie, Angus J., and Helen F. Nahrung. "Post-Border Forest Biosecurity in Australia: Response to Recent Exotic Detections, Current Surveillance and Ongoing Needs." Forests 10, no. 4 (2019): 336. http://dx.doi.org/10.3390/f10040336.

Full text
Abstract:
Assessing exotic pest response and eradication programs can identify factors that will lead to increased pest detection and provide information for prioritizing and enhancing future eradication attempts. We review the forest-related insect and pathogen detections and responses in Australia between 1996 and 2017. Thirty-four detections of new exotic forest species were made in this timeframe; seventeen each of insects and pathogens. Twenty-nine of the species are now established in mainland Australia and another in the Torres Strait. Four of the established species cause high impact, and three of these were subject to failed eradication programs. Two of the four established high-impact species were not previously recognised as threats; indeed, 85% of all new detections were not considered high-priority risks. Only one forest pest has been successfully eradicated, suggesting a lower success rate of Australian forest eradication programs than the world average. Most of these exotic pests and pathogens were not detected early enough to attempt eradication, or they were not deemed a significant enough pest to warrant an eradication attempt. Early detection is key to successful eradication. We discuss current surveillance programs in Australia and the methods (general, specific), locations (urban, regional, amenity, plantation, nursery, native forest), and surveillance type (public, industry, ad-hoc researcher, forest health surveillance, high-risk site surveillance, pest-specific trapping) that detections were made under. While there has been an increase in detections using specific surveillance since 2010, there remains a need for a structured national approach to forest biosecurity surveillance, preparedness, and responses.
APA, Harvard, Vancouver, ISO, and other styles
34

Sargunar Thomas, Jaya Christa, Suhidhana Manikandarajan, and Tinaga Kamalakkannan Subha. "AI based pest detection and alert system for farmers using IoT." E3S Web of Conferences 387 (2023): 05003. http://dx.doi.org/10.1051/e3sconf/202338705003.

Full text
Abstract:
Agriculture plays an important role in the economy and is the backbone of the economic system of developing countries. India is one of the key players in the agricultural sector worldwide. Although there are many sophisticated technologies in the field of agriculture, there is still no proper technology to control problems related to pests. Reluctance to use pesticides for controlling agricultural pests is a worldwide problem. To overcome this problem, an AI-based pest detection model is designed. The purpose of this model is to further illustrate, through classification using an artificial neural network, the effectiveness of acoustic approaches in pest detection. Numerous studies have demonstrated the viability of acoustic technologies for insect detection and monitoring using different sound parameterization and classification methods. IR sensors and a sound sensor are employed to identify the presence of insects. A deep learning technique is used to analyse and categorize the audio signal with the help of the AI model to detect the type of pest. This model not only detects the pest but also alerts farmers by notifying them through their mobile phones with the help of a Wi-Fi module and IoT.
APA, Harvard, Vancouver, ISO, and other styles
35

Dong, Shifeng, Jianming Du, Lin Jiao, et al. "Automatic Crop Pest Detection Oriented Multiscale Feature Fusion Approach." Insects 13, no. 6 (2022): 554. http://dx.doi.org/10.3390/insects13060554.

Full text
Abstract:
Specialized pest control for agriculture is a high-priority agricultural issue. There are multiple categories of tiny pests, which pose significant challenges to monitoring. Previous work mainly relied on manual monitoring of pests, which was labor-intensive and time-consuming. Recently, deep-learning-based pest detection methods have achieved remarkable improvements and can be used for automatic pest monitoring. However, there are two main obstacles in the task of pest detection. (1) Small pests often go undetected because much information is lost during the network training process. (2) The highly similar physical appearances of some categories of pests make it difficult to distinguish the specific categories for networks. To alleviate the above problems, we proposed the multi-category pest detection network (MCPD-net), which includes a multiscale feature pyramid network (MFPN) and a novel adaptive feature region proposal network (AFRPN). MFPN can fuse the pest information in multiscale features, which significantly improves detection accuracy. AFRPN solves the problem of anchor and feature misalignment during RPN iterating, especially for small pest objects. In extensive experiments on the multi-category pests dataset 2021 (MPD2021), the proposed method achieved 67.3% mean average precision (mAP) and 89.3% average recall (AR), outperforming other deep learning-based models.
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Shaohua, Dachuan Xu, Haojian Liang, et al. "Advances in Deep Learning Applications for Plant Disease and Pest Detection: A Review." Remote Sensing 17, no. 4 (2025): 698. https://doi.org/10.3390/rs17040698.

Full text
Abstract:
Traditional methods for detecting plant diseases and pests are time-consuming, labor-intensive, and require specialized skills and resources, making them insufficient to meet the demands of modern agricultural development. To address these challenges, deep learning technologies have emerged as a promising solution for the accurate and timely identification of plant diseases and pests, thereby reducing crop losses and optimizing agricultural resource allocation. By leveraging its advantages in image processing, deep learning technology has significantly enhanced the accuracy of plant disease and pest detection and identification. This review provides a comprehensive overview of recent advancements in applying deep learning algorithms to plant disease and pest detection. It begins by outlining the limitations of traditional methods in this domain, followed by a systematic discussion of the latest developments in applying various deep learning techniques—including image classification, object detection, semantic segmentation, and change detection—to plant disease and pest identification. Additionally, this study highlights the role of large-scale pre-trained models and transfer learning in improving detection accuracy and scalability across diverse crop types and environmental conditions. Key challenges, such as enhancing model generalization, addressing small lesion detection, and ensuring the availability of high-quality, diverse training datasets, are critically examined. Emerging opportunities for optimizing pest and disease monitoring through advanced algorithms are also emphasized. Deep learning technology, with its powerful capabilities in data processing and pattern recognition, has become a pivotal tool for promoting sustainable agricultural practices, enhancing productivity, and advancing precision agriculture.
APA, Harvard, Vancouver, ISO, and other styles
37

Yuan, Wenxia, Lingfang Lan, Jiayi Xu, et al. "Smart Agricultural Pest Detection Using I-YOLOv10-SC: An Improved Object Detection Framework." Agronomy 15, no. 1 (2025): 221. https://doi.org/10.3390/agronomy15010221.

Full text
Abstract:
Aiming at the problems of insufficient detection accuracy and high false detection rates of traditional pest detection models in the face of small targets and incomplete targets, this study proposes an improved target detection network, I-YOLOv10-SC. The network leverages Space-to-Depth Convolution to enhance its capability in detecting small insect targets. The Convolutional Block Attention Module is employed to improve feature representation and attention focus. Additionally, Shape Weights and Scale Adjustment Factors are introduced to optimize the loss function. The experimental results show that compared with the original YOLOv10, the model generated by the improved algorithm improves the accuracy by 5.88 percentage points, the recall rate by 6.67 percentage points, the balance score by 6.27 percentage points, the mAP value by 4.26 percentage points, the bounding box loss by 18.75%, the classification loss by 27.27%, and the feature point loss by 8%. The model oscillation has also been significantly improved. The enhanced I-YOLOv10-SC network effectively addresses the challenges of detecting small and incomplete insect targets in tea plantations, offering high precision and recall rates, thus providing a solid technical foundation for intelligent pest monitoring and precise prevention in smart tea gardens.
APA, Harvard, Vancouver, ISO, and other styles
38

J, S. Vandana Shree, and S. Ayaz Pasha. "Pest Detection And Obliteration Based Robotic System." Perspectives in Communication, Embedded-systems and Signal-processing - PiCES 6, no. 4 (2022): 23–24. https://doi.org/10.5281/zenodo.6969951.

Full text
Abstract:
Pest infestation is one of the major challenges faced by farmers. If these pests are not controlled, they can threaten the harvest, generating huge losses for farmers. Thus, pesticides have to be applied to plants. However, extensive usage of pesticides can have adverse effects on human life when the harvest enters the food chain. In order to curb the extensive usage of pesticides, we propose a pest obliteration system that captures real-time video of plants; if pests are found, pesticide is sprayed, else the robotic prototype keeps moving until it finds a pest on a plant.
APA, Harvard, Vancouver, ISO, and other styles
39

J, S. Vandana Shree, and S. Ayaz Pasha. "Pest Detection and Obliteration Based Robotic System." Perspectives in Communication, Embedded-systems and Signal-processing - PiCES 5, no. 11 (2022): 107–9. https://doi.org/10.5281/zenodo.6331644.

Full text
Abstract:
Pest infestation is one of the major challenges faced by farmers. If these pests are not controlled, they can threaten the harvest, generating huge losses for farmers. Thus, pesticides have to be applied to plants. However, extensive usage of pesticides can have adverse effects on human life when the harvest enters the food chain. In order to curb the extensive usage of pesticides, we propose a pest obliteration system that captures real-time video of plants; if pests are found, pesticide is sprayed, else the robotic prototype keeps moving until it finds a pest on a plant.
APA, Harvard, Vancouver, ISO, and other styles
40

Dai, Min, Md Mehedi Hassan Dorjoy, Hong Miao, and Shanwen Zhang. "A New Pest Detection Method Based on Improved YOLOv5m." Insects 14, no. 1 (2023): 54. http://dx.doi.org/10.3390/insects14010054.

Full text
Abstract:
Pest detection in plants is essential for ensuring high productivity. Convolutional neural networks (CNN)-based deep learning advancements recently have made it possible for researchers to increase object detection accuracy. In this study, pest detection in plants with higher accuracy is proposed by an improved YOLOv5m-based method. First, the SWin Transformer (SWinTR) and Transformer (C3TR) mechanisms are introduced into the YOLOv5m network so that they can capture more global features and can increase the receptive field. Then, in the backbone, ResSPP is considered to make the network extract more features. Furthermore, the global features of the feature map are extracted in the feature fusion phase and forwarded to the detection phase via a modification of the three output necks C3 into SWinTR. Finally, WConcat is added to the fusion feature, which increases the feature fusion capability of the network. Experimental results demonstrate that the improved YOLOv5m achieved 95.7% precision rate, 93.1% recall rate, 94.38% F1 score, and 96.4% Mean Average Precision (mAP). Meanwhile, the proposed model is significantly better than the original YOLOv3, YOLOv4, and YOLOv5m models. The improved YOLOv5m model shows greater robustness and effectiveness in detecting pests, and it could more precisely detect different pests from the dataset.
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Shuai, Ziyao Xing, Hengbin Wang, et al. "Maize-YOLO: A New High-Precision and Real-Time Method for Maize Pest Detection." Insects 14, no. 3 (2023): 278. http://dx.doi.org/10.3390/insects14030278.

Full text
Abstract:
The frequent occurrence of crop pests and diseases is one of the important factors leading to the reduction of crop quality and yield. Since pests are characterized by high similarity and fast movement, this poses a challenge for artificial intelligence techniques to identify pests in a timely and accurate manner. Therefore, we propose a new high-precision and real-time method for maize pest detection, Maize-YOLO. The network is based on YOLOv7 with the insertion of the CSPResNeXt-50 module and VoVGSCSP module. It can improve network detection accuracy and detection speed while reducing the computational effort of the model. We evaluated the performance of Maize-YOLO in a typical large-scale pest dataset IP102. We trained and tested against those pest species that are more damaging to maize, including 4533 images and 13 classes. The experimental results show that our method outperforms the current state-of-the-art YOLO family of object detection algorithms and achieves suitable performance at 76.3% mAP and 77.3% recall. The method can provide accurate and real-time pest detection and identification for maize crops, enabling highly accurate end-to-end pest detection.
APA, Harvard, Vancouver, ISO, and other styles
42

P, Venkatasaichandrakanth, and Iyapparaja M. "GROUNDNUT CROP PEST DETECTION AND CLASSIFICATION USING COMPREHENSIVE DEEP-LEARNING MODELS." Suranaree Journal of Science and Technology 31, no. 1 (2024): 020028(1–17). http://dx.doi.org/10.55766/sujst-2024-01-e02544.

Full text
Abstract:
Pests pose a significant threat to crops, leading to substantial economic losses and decreased food production. Early detection and accurate classification of pests in crops are crucial for effective pest management strategies. In this study, we propose a method for pest detection and classification in groundnut crops using deep learning models. In this research, we compare the performance of three deep learning models, namely Custom CNN [proposed], LeNet-5, and VGG-16, for groundnut pest detection and classification. A comprehensive dataset containing images of diverse groundnut crop pests, including thrips, aphids, armyworms, and wireworms, from the IP102 dataset was utilized for model evaluation. The performance is evaluated using reliability metrics such as accuracy and loss. These findings demonstrate the utility of deep learning models for reliable pest classification of groundnut crops. The Custom CNN [proposed] model demonstrates high training accuracy but potential overfitting, while the VGG-16 model performs well on both training and test data, showcasing its ability to generalize. The models’ accuracy in predicting pest species underscores their capability to capture and utilize visual patterns for precise classification. These findings underscore the potential of deep learning models, particularly the VGG-16 model, for pest detection and classification in groundnut crops. The knowledge gained from this study can contribute to the development of practical pest management strategies and aid in maintaining crop health and productivity. Further analysis and comparisons with other models are recommended to comprehensively evaluate the competitiveness and suitability of deep learning models in real-world applications of pest detection and classification in agricultural settings.
APA, Harvard, Vancouver, ISO, and other styles
43

Tang, Ke, Yurong Qian, Hualong Dong, et al. "SP-YOLO: A Real-Time and Efficient Multi-Scale Model for Pest Detection in Sugar Beet Fields." Insects 16, no. 1 (2025): 102. https://doi.org/10.3390/insects16010102.

Full text
Abstract:
Beet crops are highly vulnerable to pest infestations throughout their growth cycle, which significantly affects crop development and yield. Timely and accurate pest identification is crucial for implementing effective control measures. Current pest detection tasks face two primary challenges: first, pests frequently blend into their environment due to similar colors, making it difficult to capture distinguishing features in the field; second, pest images exhibit scale variations under different viewing angles, lighting conditions, and distances, which complicates the detection process. This study constructed the BeetPest dataset, a multi-scale pest dataset for beets in complex backgrounds, and proposed the SP-YOLO model, which is an improved real-time detection model based on YOLO11. The model integrates a CNN and transformer (CAT) into the backbone network to capture global features. The lightweight depthwise separable convolution block (DSCB) module is designed to extract multi-scale features and enlarge the receptive field. The neck utilizes the cross-layer path aggregation network (CLPAN) module, further merging low-level and high-level features. SP-YOLO effectively differentiates between the background and target, excelling in handling scale variations in pest images. In comparison with the original YOLO11 model, SP-YOLO shows a 4.9% improvement in mean average precision (mAP@50), a 9.9% increase in precision, and a 1.3% rise in average recall. Furthermore, SP-YOLO achieves a detection speed of 136 frames per second (FPS), meeting real-time pest detection requirements. The model demonstrates remarkable robustness on other pest datasets while maintaining a manageable parameter size and computational complexity suitable for edge devices.
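The 136 FPS figure above corresponds to a per-frame budget of roughly 1000 / 136 ≈ 7.35 ms. A throughput measurement of the kind implied by the abstract can be sketched as follows; the `detect` callable and frame list are placeholders, not the authors' code:

```python
import time

def measure_fps(detect, frames):
    """Rough frames-per-second estimate for a detector callable.
    Real benchmarks typically warm up the model first and, on GPU,
    synchronize the device before reading the clock."""
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```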
APA, Harvard, Vancouver, ISO, and other styles
44

Guan, Bolun, Yaqian Wu, Jingbo Zhu, Juanjuan Kong, and Wei Dong. "GC-Faster RCNN: The Object Detection Algorithm for Agricultural Pests Based on Improved Hybrid Attention Mechanism." Plants 14, no. 7 (2025): 1106. https://doi.org/10.3390/plants14071106.

Full text
Abstract:
Pest infestations remain a critical threat to global agriculture, significantly compromising crop yield and quality. While accurate pest detection forms the foundation of precision pest management, current approaches face two primary challenges: (1) the scarcity of comprehensive multi-scale, multi-category pest datasets and (2) performance limitations in detection models caused by substantial target scale variations and high inter-class morphological similarity. To address these issues, we present three key contributions: First, we introduce Insect25—a novel agricultural pest detection dataset containing 25 distinct pest categories, comprising 18,349 high-resolution images. This dataset specifically addresses scale diversity through multi-resolution acquisition protocols, significantly enriching feature distribution for robust model training. Second, we propose GC-Faster RCNN, an enhanced detection framework integrating a hybrid attention mechanism that synergistically combines channel-wise correlations and spatial dependencies. This dual attention design enables more discriminative feature extraction, which is particularly effective for distinguishing morphologically similar pest species. Third, we implement an optimized training strategy featuring a cosine annealing scheduler with linear warm-up, accelerating model convergence while maintaining training stability. Experiments show that, compared with the original Faster RCNN model, GC-Faster RCNN improves mAP0.5 on the Insect25 dataset by 4.5 percentage points, mAP0.75 by 20.4 percentage points, and mAP0.5:0.95 by 20.8 percentage points, while the recall rate increases by 16.6 percentage points. In addition, experiments show that the GC-Faster RCNN detection method reduces interference from multiple scales and high inter-class similarity, improving detection performance.
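The cosine annealing schedule with linear warm-up mentioned in the abstract above can be sketched as a plain function; the function name and parameters are illustrative, not taken from the paper:

```python
import math

def lr_at_step(step, total_steps, warmup_steps, base_lr, min_lr=0.0):
    """Linear warm-up followed by cosine annealing of the learning rate."""
    if step < warmup_steps:
        # ramp linearly from base_lr / warmup_steps up to base_lr
        return base_lr * (step + 1) / warmup_steps
    # cosine decay from base_lr down to min_lr over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The warm-up phase stabilizes early training by avoiding a full-size learning rate on randomly initialized weights, after which the cosine curve decays smoothly toward `min_lr`.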
APA, Harvard, Vancouver, ISO, and other styles
45

Cheng, Zekai, Rongqing Huang, Rong Qian, Wei Dong, Jingbo Zhu, and Meifang Liu. "A Lightweight Crop Pest Detection Method Based on Convolutional Neural Networks." Applied Sciences 12, no. 15 (2022): 7378. http://dx.doi.org/10.3390/app12157378.

Full text
Abstract:
Existing object detection methods with many parameters and computations are not suitable for deployment on devices with poor performance in agricultural environments. Therefore, this study proposes a lightweight crop pest detection method based on convolutional neural networks, named YOLOLite-CSG. The basic architecture of the method is derived from a simplified version of YOLOv3, namely YOLOLite, and k-means++ is utilized to improve the generation process of the prior boxes. In addition, a lightweight sandglass block and coordinate attention are used to optimize the structure of residual blocks. The method was evaluated on the CP15 crop pest dataset. Its detection precision exceeds that of YOLOv3, at 82.9%, while the number of parameters is 5 million, only 8.1% of the number used by YOLOv3, and the number of computations is 9.8 GFLOPs, only 15% of that used by YOLOv3. Furthermore, the detection precision of the method is superior to all other commonly used object detection methods evaluated in this study, with a maximum improvement of 10.6%, and it still has a significant edge in the number of parameters and computation required. The method has excellent pest detection precision with extremely few parameters and computations. It is well-suited to be deployed on equipment for detecting crop pests in agricultural environments.
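Clustering prior boxes, as the abstract above describes with k-means++, is commonly done with an IoU-based distance d = 1 − IoU between width–height pairs rather than Euclidean distance; a hedged sketch of that metric follows (the paper's exact clustering setup may differ):

```python
def iou_wh(box, anchor):
    """IoU between two (width, height) pairs with aligned top-left
    corners, the similarity commonly used to cluster YOLO prior boxes."""
    w1, h1 = box
    w2, h2 = anchor
    inter = min(w1, w2) * min(h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

def kmeans_distance(box, anchor):
    # k-means over boxes minimizes 1 - IoU so that large and small
    # boxes are treated on equal footing
    return 1.0 - iou_wh(box, anchor)
```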
APA, Harvard, Vancouver, ISO, and other styles
46

As'ad, Avif, Suroso Suroso, Ciksadan Ciksadan, and Erni Hawayanti. "Penerapan Algoritma Yolov3 pada Sistem Cerdas Pendeteksi dan Pengendali Hama Bawang Merah Berbasis IoT." Building of Informatics, Technology and Science (BITS) 6, no. 2 (2024): 930–39. https://doi.org/10.47065/bits.v6i2.5697.

Full text
Abstract:
Technological advancements play a crucial role in enhancing the efficiency of modern agriculture, particularly in addressing pest management challenges. This study focuses on the development of an automatic pest detection system for shallot crops using a combination of Arduino Uno microcontroller, ESP32-CAM camera module, and YOLOv3 object detection model. The system is designed to detect pests in real-time through images captured by ESP32-CAM and analyzed using YOLOv3, then provide an automatic response by spraying pesticides only in areas where pests are detected. The study began with the development of hardware and software for the automatic pest detection system. Arduino Uno is used as the main microcontroller to control the entire system, while ESP32-CAM is responsible for capturing images and detecting pests. The YOLOv3 model is trained using the COCO dataset, supplemented with sample images of pests on shallot crops to improve detection accuracy. The training process is conducted using a GPU to speed up model learning. Field tests on shallot crops infested with various types of pests show that this system has a high accuracy rate in detecting pests and effectively provides automatic pesticide spraying responses. The spraying system's effectiveness reaches 93%, ensuring pesticides are sprayed only in areas where pests are detected, thus optimizing pesticide use and reducing negative environmental impacts. This system offers an efficient and environmentally friendly solution for pest control and has significant potential for application in various agricultural scenarios. This research contributes to the improvement of agricultural productivity and the welfare of farmers in Indonesia.
APA, Harvard, Vancouver, ISO, and other styles
47

Pantoni, Rodrigo, and Otávio Toraça Dias. "Detection of Diatraea Saccharalis in images using convolutional neural networks." Revista Engenharia na Agricultura - REVENG 33, Contínua (2025): 32–44. https://doi.org/10.13083/reveng.v33i1.18733.

Full text
Abstract:
Agricultural pests are organisms capable of significantly impacting the yield and quality of cultivated crops. Traditionally, population control of insect pests has relied on methods such as trapping and subsequent analysis of captured individuals to implement specific control actions, such as the use of insecticides. However, advancements in Computer Vision and Deep Learning techniques offer promising ways for more efficient pest detection and management. This study aims to apply Convolutional Neural Networks (CNNs) to detect the insect pest Diatraea saccharalis, a major pest of sugarcane crops. A dataset comprising 945 training images and 470 test images of deceased insects collected from traps was compiled in order to train and test the model. The Yolov8 Computer Vision framework was employed for software implementation. Results indicate promising outcomes, with the trained CNN achieving 96.2% precision and 95.8% recall. The application of Computer Vision in pest management could lead to more timely and accurate detection of pests, reducing the need for widespread insecticide use, enabling specific interventions, and minimizing labor-intensive monitoring tasks. This research highlights the potential of Deep Learning methodologies to enhance agricultural pest management strategies by improving early pest detection, reducing crop damage, and optimizing the use of pest control resources.
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Jae-Hyeon, Chang-Hwan Son, and Hwijong Yi. "Multiscale CenterNet for Pest Detection and Counting." Journal of Korean Institute of Information Technology 20, no. 7 (2022): 111–21. http://dx.doi.org/10.14801/jkiit.2022.20.7.111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

G, Madhavi, Jhansi Rani A., and Srinivasa Rao S. "Pest Detection for Rice Using Artificial Intelligence." International Research Journal on Advanced Science Hub 3, Special Issue ICITCA-2021 5S (2021): 54–60. http://dx.doi.org/10.47392/irjash.2021.140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

T, Devika, Santhiyakumari N, Nagaraj J, Arun S K, Sam Sundhar T, and Siva Sakthi K. "Crop Pest Detection using Convolutional Neural Network." Journal of Soft Computing Paradigm 6, no. 3 (2024): 314–23. http://dx.doi.org/10.36548/jscp.2024.3.007.

Full text
Abstract:
Pests in plants can cause significant losses in agricultural production. As a result, various technologies are used nowadays to improve agriculture's efficiency and make it more sustainable. This research highlights the contribution of machine learning algorithms and image recognition technologies for pest identification. Farmers can use the system to recognize pests and take the necessary actions to reduce them. Convolutional Neural Networks (CNN) is used in this study for image recognition tasks, including pest identification in agricultural fields. The algorithm is trained using the Agricultural Pests Dataset acquired from Kaggle. The experiment results showed that the CNN performed better than the other state-of-the-art machine learning models, with a much lower false rejection rate of 0.12% and an accuracy of 99%.
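The false rejection rate quoted above (0.12%) is the share of genuine pest images the classifier rejects; a minimal sketch of both metrics follows (a generic computation, not the study's evaluation code):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def false_rejection_rate(y_true, y_pred, positive="pest"):
    """Fraction of true positives the classifier fails to accept."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == positive]
    rejected = sum(1 for t, p in positives if p != positive)
    return rejected / len(positives)
```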
APA, Harvard, Vancouver, ISO, and other styles