To see the other types of publications on this topic, follow the link: GrabCut algorithm.

Journal articles on the topic 'GrabCut algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'GrabCut algorithm.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pang, Shangzhen, Tzer Hwai Gilbert Thio, Fei Lu Siaw, Mingju Chen, and Yule Xia. "Research on Improved Image Segmentation Algorithm Based on GrabCut." Electronics 13, no. 20 (2024): 4068. http://dx.doi.org/10.3390/electronics13204068.

Full text
Abstract:
The classic interactive image segmentation algorithm GrabCut achieves segmentation through iterative optimization. However, GrabCut requires multiple iterations, resulting in slower performance. Moreover, relying solely on a rectangular bounding box can sometimes lead to inaccuracies, especially when dealing with complex shapes or intricate object boundaries. To address these issues in GrabCut, an improvement is introduced by incorporating appearance overlap terms to optimize the segmentation energy function, thereby achieving optimal segmentation results in a single iteration. This enhancement significantly reduces computational costs while improving the overall segmentation speed without compromising accuracy. Additionally, users can directly provide seed points on the image to more accurately indicate foreground and background regions, rather than relying solely on a bounding box. This interactive approach not only enhances the algorithm’s ability to accurately segment complex objects but also simplifies the user experience. We evaluate the experimental results through qualitative and quantitative analysis. In qualitative analysis, improvements in segmentation accuracy are visibly demonstrated through segmented images and residual segmentation results. In quantitative analysis, the improved algorithm outperforms the GrabCut and min_cut algorithms in processing speed. In scenes containing complex objects, or foreground objects that are very similar to the background, the improved algorithm produces more stable segmentation results.
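A minimal OpenCV sketch of the seed-point idea described above: a coarse bounding box plus a few user-marked foreground/background pixels drive a mask-initialized GrabCut run. The image path, box, and seed coordinates are placeholders, and plain cv2.grabCut is used here rather than the paper's single-iteration energy with appearance-overlap terms.

```python
import cv2
import numpy as np

# Hypothetical input image and seed coordinates; replace with real data.
img = cv2.imread("input.jpg")
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)

# Coarse bounding box marks "probably foreground".
x, y, w, h = 50, 50, 300, 200
mask[y:y + h, x:x + w] = cv2.GC_PR_FGD

# User-provided seed points: definite foreground / definite background pixels.
for px, py in [(180, 150), (200, 160)]:
    mask[py, px] = cv2.GC_FGD
for px, py in [(10, 10), (400, 20)]:
    mask[py, px] = cv2.GC_BGD

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("segmented.png", cv2.bitwise_and(img, img, mask=fg))
```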
APA, Harvard, Vancouver, ISO, and other styles
2

Rui, Zhang, Na Ding, Xin-pin Lu, Ying-qi Xu, and Bin-jie Xin. "Fiber Identification in Cross Section of Blended Yarn on Back Propagation Neural Network." AATCC Journal of Research 8, no. 2_suppl (2021): 95–99. http://dx.doi.org/10.14504/ajr.8.s2.19.

Full text
Abstract:
An intelligent recognition algorithm was developed to identify fibers in the cross sections of blended yarn containing meta-aramid 1313 (Nomex), poly(phenylene-1,3,4-oxadiazole) (POD), flame-resistant viscose, and flame-resistant vinylon. The yarn cross section image was obtained at ×400 magnification. Drawing software was used to manually isolate single-fiber images for training the back-propagation (BP) neural network model, implemented in MATLAB image-processing software. The GrabCut algorithm was used to de-noise the image and separate the target from the background. Finally, single fiber images and fiber distributions were obtained through the program. The results showed that the BP neural network model with the GrabCut algorithm can identify fiber type in a complex background more easily and more accurately than traditional algorithms.
APA, Harvard, Vancouver, ISO, and other styles
3

Subaran, Tiara Lestari, Transmissia Semiawan, and Nurjannah Syakrani. "Mask R-CNN and GrabCut Algorithm for an Image-based Calorie Estimation System." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (2022): 1–10. http://dx.doi.org/10.20473/jisebi.8.1.1-10.

Full text
Abstract:
Background: A calorie estimation system based on food images uses computer vision technology to recognize and count calories. There are two key processes required in the system: detection and segmentation. Many algorithms can undertake both processes, each algorithm with different levels of accuracy. Objective: This study aims to improve the accuracy of calorie calculation and segmentation processes using a combination of Mask R-CNN and GrabCut algorithms. Methods: The segmentation masks generated from Mask R-CNN and GrabCut were combined to create a new mask, which was then used to calculate the calories. By considering the image augmentation technique, the accuracy of the calorie calculation and segmentation processes was observed to evaluate the method’s performance. Results: The proposed method could achieve a satisfying result, with an average calculation error value of less than 10% and an F1 score above 90% in all scenarios. Conclusion: Compared to earlier studies, the combination of Mask R-CNN and GrabCut could obtain a more satisfying result in calculating food calories with different shapes. Keywords: Augmentation, Calorie Calculation, Detection
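A rough sketch of the mask-combination step in OpenCV/Python: a (hypothetical) Mask R-CNN binary mask seeds a GrabCut refinement, and the two masks are then merged. The union rule used here is an assumption; the paper's exact merging strategy and the calorie computation are not reproduced.

```python
import cv2
import numpy as np

def combine_masks(img, rcnn_mask):
    """Refine a hypothetical Mask R-CNN binary mask with GrabCut and merge the two."""
    # Seed GrabCut from the detector mask: inside -> probable FG, outside -> probable BG.
    gc_mask = np.where(rcnn_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, gc_mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    grabcut_mask = np.where(
        (gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    # Merge both masks (union is an assumption) into the region used for area/calorie estimation.
    return cv2.bitwise_or(grabcut_mask, (rcnn_mask > 0).astype(np.uint8) * 255)
```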
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Liangfen, and Jiannong He. "Improved image segmentation algorithm based on GrabCut." Journal of Computer Applications 33, no. 1 (2013): 49–52. http://dx.doi.org/10.3724/sp.j.1087.2013.00049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Duan, Fuzhou, Yanyan Wu, Hongliang Guan, and Chenbo Wu. "Saliency Detection of Light Field Images by Fusing Focus Degree and GrabCut." Sensors 22, no. 19 (2022): 7411. http://dx.doi.org/10.3390/s22197411.

Full text
Abstract:
In the light field image saliency detection task, redundant cues are introduced due to computational methods. Inevitably, this leads to the inaccurate boundary segmentation of detection results and the problem of the chain block effect. To tackle this issue, we propose a method for salient object detection (SOD) in light field images that fuses focus and GrabCut. The method improves the light field focus calculation based on the spatial domain by performing secondary blurring processing on the focus image and effectively suppresses the focus information of out-of-focus areas in different focus images. Aiming at the redundancy of focus cues generated by multiple foreground images, we use the optimal single foreground image to generate focus cues. In addition, aiming at the fusion of various cues in the light field in complex scenes, the GrabCut algorithm is combined with the focus cue to guide the generation of color cues, which realizes the automatic saliency target segmentation of the image foreground. Extensive experiments are conducted on the light field dataset to demonstrate that our algorithm can effectively segment the salient target area and background area under the light field image, and the outline of the salient object is clear. Compared with the traditional GrabCut algorithm, the focus degree is used instead of manual interaction to initialize GrabCut, achieving automatic saliency segmentation.
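A hedged sketch of replacing user interaction with a focus cue: a focus (or saliency) map, assumed to be normalized to [0, 1], is thresholded and eroded/dilated to build the GrabCut initialization mask. Thresholds and kernel sizes are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def grabcut_from_focus(img, focus_map):
    """Initialize GrabCut from a focus/saliency map instead of a user-drawn box."""
    strong = (focus_map > 0.7).astype(np.uint8)
    weak = (focus_map > 0.3).astype(np.uint8)
    kernel = np.ones((9, 9), np.uint8)

    mask = np.full(img.shape[:2], cv2.GC_BGD, np.uint8)
    mask[cv2.dilate(weak, kernel) > 0] = cv2.GC_PR_BGD   # fringe around the object
    mask[weak > 0] = cv2.GC_PR_FGD                       # moderately focused pixels
    mask[cv2.erode(strong, kernel) > 0] = cv2.GC_FGD     # core of the focused region

    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```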
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Jianyun, Hui Wang, Shuang Lin, and Li He. "Design of an Image Integrated Processing System for Improving Efficiency of Electric Vehicle Supporting Products." Mobile Information Systems 2022 (August 24, 2022): 1–9. http://dx.doi.org/10.1155/2022/1949962.

Full text
Abstract:
This study analyzes a solution that requires efficient and comprehensive processing of images of a large number of vehicles and their related parts, such as batteries, plastic fastening components, and brake discs, during the design investigation of electric vehicle accessories. The problem involves the extraction of the outer contours of different components, which is important to build a comprehensive image processing system that can handle different vehicle accessories. In this study, a comprehensive image processing system is proposed, which introduces an improved GrabCut and computer vision methods. It can complete the positioning of vehicle batteries, the fastening of automobile components, and the identification of brake discs, which improves the efficiency of inspection and design work. The improved GrabCut uses adaptive median filtering on images of the electric car accessories to reduce surface noise to varying degrees. The image is then sharpened using the Laplacian operator, followed by a contrast-limited adaptive histogram equalization (CLAHE) algorithm to boost the image brightness. We have compared our proposed work against existing techniques, i.e., the GrabCut algorithm, region growing algorithm, and K-means algorithm. The comparison clearly shows that our proposed work achieves a much better peak signal-to-noise ratio value as compared to the existing techniques.
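A rough OpenCV sketch of the described preprocessing chain before GrabCut; all parameters are assumptions, and a fixed-size median filter stands in for the paper's adaptive median filter.

```python
import cv2
import numpy as np

def preprocess_for_grabcut(img):
    """Denoise, sharpen, and brighten an accessory image before segmentation."""
    denoised = cv2.medianBlur(img, 5)

    # Laplacian sharpening: subtract the Laplacian response from the image.
    lap = cv2.Laplacian(denoised, cv2.CV_16S, ksize=3)
    sharpened = np.clip(denoised.astype(np.int16) - lap, 0, 255).astype(np.uint8)

    # CLAHE on the lightness channel of Lab to boost brightness/contrast locally.
    lab = cv2.cvtColor(sharpened, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```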
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Chongyi, Wanyu Huang, Ruoqi Zhang, and Rui Kong. "Portrait Extraction Algorithm Based on Face Detection and Image Segmentation." Computer and Information Science 12, no. 2 (2019): 1. http://dx.doi.org/10.5539/cis.v12n2p1.

Full text
Abstract:
Aiming to solve a series of problems in collecting photos for citizens' licenses, this paper proposes a portrait extraction algorithm based on face detection technology and state-of-the-art image segmentation algorithms. Considering an input image whose foreground is a person of unfixed size against all sorts of complicated backgrounds, we first use the Haar & AdaBoost face detection algorithm as a preprocessing step to divide the image into different sub-regions and obtain a fixed-size image of the human face. Then we use the GrabCut and closed-form algorithms to segment the preprocessed image and output an image that satisfies our requirements (i.e., a fixed size and fixed background). Both the GrabCut and closed-form algorithms have been implemented, and each has its own advantages and shortcomings.
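The face-detection-to-GrabCut hand-off might look like the sketch below, where a Haar cascade box is expanded to roughly cover head and shoulders and then seeds a rectangle-initialized GrabCut. The expansion factors and file names are assumptions, and the closed-form matting stage is not reproduced.

```python
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces):
    x, y, w, h = faces[0]
    # Grow the face box to roughly cover head and shoulders before segmentation.
    x0, y0 = max(0, x - w), max(0, y - h)
    x1, y1 = min(img.shape[1], x + 2 * w), min(img.shape[0], y + 3 * h)
    rect = (int(x0), int(y0), int(x1 - x0), int(y1 - y0))

    mask = np.zeros(img.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    cv2.imwrite("portrait_fg.png", cv2.bitwise_and(img, img, mask=fg))
```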
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang Cuijun (张翠军), and Zhao Na (赵娜). "Improved GrabCut Algorithm Based on Probabilistic Neural Network." Laser & Optoelectronics Progress 58, no. 2 (2021): 0210024. http://dx.doi.org/10.3788/lop202158.0210024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yong, Zhang, Yuan Jiazheng, Liu Hongzhe, and Li Qing. "GrabCut image segmentation algorithm based on structure tensor." Journal of China Universities of Posts and Telecommunications 24, no. 2 (2017): 38–47. http://dx.doi.org/10.1016/s1005-8885(17)60197-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rui, Wang, Jin Ye Peng, Li Ping Che, and Yu Ting Hou. "Improved Color Image Segmentation Algorithm Based on GrabCut." Applied Mechanics and Materials 373-375 (August 2013): 464–67. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.464.

Full text
Abstract:
Foreground extraction is a common problem in practical image processing. When processing large numbers of color images, an important requirement is the automation of the extraction process. In this paper, we improve the existing segmentation algorithm by automatically setting foreground seeds; by automatically searching the segmentation region, we accomplish segmentation with the GrabCut algorithm, which is based on a Gaussian mixture model and boundary computation. The improved algorithm achieves automated image segmentation without user participation in the process; at the same time, it improves the efficiency of image segmentation and obtains good segmentation results against complex backgrounds.
APA, Harvard, Vancouver, ISO, and other styles
11

Jue, Li. "Foreground Extraction Based on Dual-Camera System." Applied Mechanics and Materials 50-51 (February 2011): 673–77. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.673.

Full text
Abstract:
This paper presents an automatic foreground segmentation algorithm for stereo image pairs captured by a dual-camera system. Unlike monocular images, binocular images contain a disparity map between the stereo image pair. Because computing the disparity map is expensive, our approach adopts the residual image under a spatial displacement to segment the initial trimap automatically. From the residual image, a rough foreground region is clustered as the initial trimap of the GrabCut algorithm. Compared with a rectangular region, the calculated trimap is more accurate. After running the GrabCut algorithm, the images are segmented into foreground and background layers that comprise the front objects and the back environment. Experimental segmentation results with the original images captured by the dual-camera system indicate that our approach is efficient and promising.
APA, Harvard, Vancouver, ISO, and other styles
12

Tong, Kuangwei, Zhongbin Wang, Lei Si, Chao Tan, and Peiyang Li. "A Novel Pipeline Leak Recognition Method of Mine Air Compressor Based on Infrared Thermal Image Using IFA and SVM." Applied Sciences 10, no. 17 (2020): 5991. http://dx.doi.org/10.3390/app10175991.

Full text
Abstract:
In order to accurately identify the pipeline leak fault of a mine air compressor, a novel intelligent diagnosis method is presented based on the integration of an adaptive wavelet threshold denoising (WTD) algorithm, improved firefly algorithm (IFA), Otsu-Grabcut image segmentation algorithm, histogram of oriented gradient (HOG), gray-level co-occurrence matrix (GLCM) and support vector machine (SVM). In the proposed method, the adaptive step strategy and local optimal firefly self-search strategy for the basic firefly algorithm (FA) are used to improve the optimization effect. The infrared thermal image is denoised by using wavelet threshold algorithm which is optimized by IFA (WTD-IFA). The Otsu-Grabcut algorithm is used to segment the image and extract the target. The HOG and GLCM are calculated to reveal the intrinsic characteristics of the infrared thermal image to extract feature vectors. Then the IFA is utilized to optimize the parameters of SVM so as to construct an optimal classifier for fault diagnosis. Finally, the proposed fault diagnosis method is fully evaluated by experimentation and the results verify its feasibility and superiority.
APA, Harvard, Vancouver, ISO, and other styles
13

Xu, Chao, Dongping Zhang, Zhengning Zhang, and Zhiyong Feng. "BgCut: Automatic Ship Detection from UAV Images." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/171978.

Full text
Abstract:
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied to the automated processing of industrial images for related research.
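A minimal sketch of the non-interactive initialization idea: a precomputed trimap (assumed to use 0 = background, 128 = unknown, 255 = foreground) is mapped to GrabCut labels and a single GrabCut pass is run without any manual rectangle. The template-matching and region-growing steps that build the trimap are not reproduced.

```python
import cv2
import numpy as np

def grabcut_from_trimap(img, trimap):
    """Non-interactive GrabCut seeded from an automatically built trimap."""
    mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
    mask[trimap == 0] = cv2.GC_BGD
    mask[trimap == 128] = cv2.GC_PR_FGD   # uncertain pixels left for GrabCut to decide
    mask[trimap == 255] = cv2.GC_FGD

    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    # A single pass replaces the usual manual-rectangle, multi-iteration workflow.
    cv2.grabCut(img, mask, None, bgd, fgd, 1, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```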
APA, Harvard, Vancouver, ISO, and other styles
14

Mao, Jiateng, and Yueli Hu. "Obstacle contour extraction method based on improved Grabcut algorithm." Journal of Physics: Conference Series 1303 (August 2019): 012051. http://dx.doi.org/10.1088/1742-6596/1303/1/012051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Jeyalakshmi, S., and R. Radha. "A NOVEL APPROACH TO SEGMENT LEAF REGION FROM PLANT LEAF IMAGE USING AUTOMATIC ENHANCED GRABCUT ALGORITHM." COMPUSOFT: An International Journal of Advanced Computer Technology 08, no. 11 (2019): 3485–93. https://doi.org/10.5281/zenodo.14912024.

Full text
Abstract:
Segmentation of the leaf region from the background is one of the essential pre-processing steps in plant leaf image processing. This paper proposes an innovative segmentation approach for extracting the color leaf region from a healthy or infected plant leaf image with background, using an enhanced automatic GrabCut algorithm that does not take any input from the user. In this method, the GrabCut algorithm was first applied to the original image. The algorithm removes the background, but shadows remain in the resultant image, which may cause misinterpretations in further processing steps. Hence, the shadows in the image were removed by thresholding the 'a' and 'b' components of the CIELAB color space. This step created holes in the infected regions of the leaf image, which had a similar color to that of the shadow. Hence, the image obtained was binarized and the holes were filled with white (foreground) color using the Flood Fill algorithm. From this binary image containing only the leaf region, the color leaf region of the image was filtered. The accuracy achieved was 98%.
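A compact sketch of the described pipeline, assuming OpenCV and illustrative thresholds (not the paper's): GrabCut with an image-wide rectangle, shadow suppression by thresholding the a/b channels of CIELAB, and hole filling with flood fill.

```python
import cv2
import numpy as np

def segment_leaf(img):
    """GrabCut + Lab-based shadow removal + flood-fill hole filling (illustrative values)."""
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, (1, 1, w - 2, h - 2), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    leaf = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # Shadows are nearly neutral in color: a and b stay close to 128 in 8-bit Lab.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    a, b = lab[:, :, 1].astype(np.int16), lab[:, :, 2].astype(np.int16)
    leaf[(np.abs(a - 128) < 8) & (np.abs(b - 128) < 8)] = 0

    # Fill interior holes (e.g. infected spots) by flood-filling the background and inverting.
    ff = leaf.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(ff, ff_mask, (0, 0), 255)
    leaf = leaf | cv2.bitwise_not(ff)
    return cv2.bitwise_and(img, img, mask=leaf)
```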
APA, Harvard, Vancouver, ISO, and other styles
16

Huang, Xuchao, Shigang Wang, Xueshan Gao, et al. "An H-GrabCut Image Segmentation Algorithm for Indoor Pedestrian Background Removal." Sensors 23, no. 18 (2023): 7937. http://dx.doi.org/10.3390/s23187937.

Full text
Abstract:
In the context of predicting pedestrian trajectories for indoor mobile robots, it is crucial to accurately measure the distance between indoor pedestrians and robots. This study aims to address this requirement by extracting pedestrians as regions of interest and mitigating issues related to inaccurate depth camera distance measurements and illumination conditions. To tackle these challenges, we focus on an improved version of the H-GrabCut image segmentation algorithm, which involves four steps for segmenting indoor pedestrians. Firstly, we leverage the YOLO-V5 object recognition algorithm to construct detection nodes. Next, we propose an enhanced BIL-MSRCR algorithm to enhance the edge details of pedestrians. Finally, we optimize the clustering features of the GrabCut algorithm by incorporating two-dimensional entropy, UV component distance, and LBP texture feature values. The experimental results demonstrate that our algorithm achieves a segmentation accuracy of 97.13% in both the INRIA dataset and real-world tests, outperforming alternative methods in terms of sensitivity, missegmentation rate, and intersection-over-union metrics. These experiments confirm the feasibility and practicality of our approach. The aforementioned findings will be utilized in the preliminary processing of indoor mobile robot pedestrian trajectory prediction and enable path planning based on the predicted results.
APA, Harvard, Vancouver, ISO, and other styles
17

Kang, Feilong, Chunguang Wang, Jia Li, and Zheying Zong. "A Multiobjective Piglet Image Segmentation Method Based on an Improved Noninteractive GrabCut Algorithm." Advances in Multimedia 2018 (July 3, 2018): 1–9. http://dx.doi.org/10.1155/2018/1083876.

Full text
Abstract:
In the video monitoring of piglets on pig farms, precise segmentation of foreground objects is a prerequisite for advanced research on target tracking and behavior recognition. In view of the noninteractive and real-time requirements of such a video monitoring system, this paper proposes a method of image segmentation based on an improved noninteractive GrabCut algorithm. Edge preservation and noise reduction are realized through bilateral filtering. An adaptive threshold segmentation method is used to calculate the local threshold and to complete the extraction of the foreground target. The image is simplified by morphological processing; the background interference pixels, such as details in the grille and wall, are filtered, and the foreground target marker matrix is established. The GrabCut algorithm is used to split the pixels of multiple foreground objects. A comparison of the segmentation results of various algorithms shows that the segmentation algorithm proposed in this paper is efficient and accurate, and the mean range of structural similarity is [0.88, 1]. The average processing time is 1606 ms, and this method satisfies the real-time requirement of an agricultural video monitoring system. Feature vectors such as edges and central moments are calculated and the database is well established for feature extraction and behavior identification. This method provides reliable foreground segmentation data for the intelligent early warning of a video monitoring system.
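One way the marker-matrix construction could look in OpenCV, as a sketch with guessed parameters: bilateral filtering, local adaptive thresholding, and morphological cleanup build a foreground marker mask that initializes GrabCut instead of user input.

```python
import cv2
import numpy as np

def segment_piglets(frame):
    """Non-interactive marker construction followed by a mask-initialized GrabCut pass."""
    smoothed = cv2.bilateralFilter(frame, 9, 75, 75)         # edge-preserving denoising
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)

    # Local (adaptive) threshold picks bright piglets against a darker floor/grille.
    fg = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 51, -5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)         # drop grille/wall speckle
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)        # close gaps inside bodies

    mask = np.where(fg > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    mask[cv2.erode(fg, kernel, iterations=3) > 0] = cv2.GC_FGD  # confident cores
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(smoothed, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```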
APA, Harvard, Vancouver, ISO, and other styles
18

Na, In Seop, Yan Juan Chen, and Soo Hyung Kim. "Automatic Segmentation of Product Bottle Label Based on GrabCut Algorithm." International Journal of Contents 10, no. 4 (2014): 1–10. http://dx.doi.org/10.5392/ijoc.2014.10.4.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Jiang, Tao. "Artificial Intelligence Aerobics Action Image Simulation Based on the Image Segmentation Algorithm." Mobile Information Systems 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/7438159.

Full text
Abstract:
At present, aerobics is becoming increasingly popular with the continuous development of cultural needs. Because aerobics involves many movements, rapid changes, strong complexity, and difficult-to-perform moves, current aerobics teaching still has shortcomings such as a low teaching level and limited teachers’ resources and energy. Therefore, it is difficult to effectively meet the actual learning needs of students. On this basis, artificial intelligence can be used to simulate and guide the technical movements of aerobics to teach students effectively. In this paper, an artificial intelligence aerobics image simulation system is researched and developed, mainly using the GrabCut image segmentation algorithm. After analyzing some shortcomings of the algorithm, cascade and graph-based optimizations of the GrabCut algorithm are selected, laying a good system foundation; the aerobics artificial intelligence image simulation system is then built on this algorithmic foundation. Finally, the paper analyzes the actual problems of aerobics teaching activities in colleges and universities and focuses on the problems, achievements, and personal satisfaction of students who use the system in actual learning, which shows that the system can effectively assist aerobics teaching activities. By studying the image segmentation algorithm and artificial intelligence technology, this paper applies them to the field of aerobics action image simulation, so as to promote its technological development.
APA, Harvard, Vancouver, ISO, and other styles
20

Deng, Lei Lei. "Pre-detection Technology of Clothing Image Segmentation Based on GrabCut Algorithm." Wireless Personal Communications 102, no. 2 (2017): 599–610. http://dx.doi.org/10.1007/s11277-017-5050-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cao, Jian Rong, Yang Xu, and Cai Yun Liu. "Algorithm of Surveillance Video Synopsis Based on Objects." Applied Mechanics and Materials 321-324 (June 2013): 1041–45. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1041.

Full text
Abstract:
After background modeling and moving-object segmentation for surveillance video, this paper first presents a non-interactive matting algorithm for video moving objects based on GrabCut. The matted moving objects are then placed in a background image under a non-overlapping arrangement, so that a single frame containing several moving objects is obtained. Finally, a series of these frames is assembled along the timeline to form a single-camera surveillance video synopsis. The experimental results show that this video synopsis is concise and readable in its condensed form, and that browsing and retrieval efficiency can be improved.
APA, Harvard, Vancouver, ISO, and other styles
22

Woodward-Greene, M. Jennifer, Jason M. Kinser, Tad S. Sonstegard, Johann Sölkner, Iosif I. Vaisman, and Curtis P. Van Tassell. "PreciseEdge raster RGB image segmentation algorithm reduces user input for livestock digital body measurements highly correlated to real-world measurements." PLOS ONE 17, no. 10 (2022): e0275821. http://dx.doi.org/10.1371/journal.pone.0275821.

Full text
Abstract:
Computer vision is a tool that could provide livestock producers with digital body measures and records that are important for animal health and production, namely body height and length, and chest girth. However, to build these tools, the scarcity of labeled training data sets with uniform images (pose, lighting) that also represent real-world livestock can be a challenge. Collecting images in a standard way, with manual image labeling is the gold standard to create such training data, but the time and cost can be prohibitive. We introduce the PreciseEdge image segmentation algorithm to address these issues by employing a standard image collection protocol with a semi-automated image labeling method, and a highly precise image segmentation for automated body measurement extraction directly from each image. These elements, from image collection to extraction are designed to work together to yield values highly correlated to real-world body measurements. PreciseEdge adds a brief preprocessing step inspired by chromakey to a modified GrabCut procedure to generate image masks for data extraction (body measurements) directly from the images. Three hundred RGB (red, green, blue) image samples were collected uniformly per the African Goat Improvement Network Image Collection Protocol (AGIN-ICP), which prescribes camera distance, poses, a blue backdrop, and a custom AGIN-ICP calibration sign. Images were taken in natural settings outdoors and in barns under high and low light, using a Ricoh digital camera producing JPG images (converted to PNG prior to processing). The rear and side AGIN-ICP poses were used for this study. PreciseEdge and GrabCut image segmentation methods were compared for differences in user input required to segment the images. The initial bounding box image output was captured for visual comparison. Automated digital body measurements extracted were compared to manual measures for each method. Both methods allow additional optional refinement (mouse strokes) to aid the segmentation algorithm. These optional mouse strokes were captured automatically and compared. Stroke count distributions for both methods were not normally distributed per Kolmogorov-Smirnov tests. Non-parametric Wilcoxon tests showed the distributions were different (p < 0.001) and the GrabCut stroke count was significantly higher (p = 5.115e-49), with a mean of 577.08 (std 248.45) versus 221.57 (std 149.45) with PreciseEdge. Digital body measures were highly correlated to manual height, length, and girth measures, (0.931, 0.943, 0.893) for PreciseEdge and (0.936, 0.944, 0.869) for GrabCut (Pearson correlation coefficient). PreciseEdge image segmentation allowed for masks yielding accurate digital body measurements highly correlated to manual, real-world measurements with over 38% less user input for an efficient, reliable, non-invasive alternative to livestock hand-held direct measuring tools.
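A very loose sketch of the chromakey-inspired idea (not the PreciseEdge algorithm itself): pixels close to the blue backdrop hue are seeded as definite background before a standard GrabCut pass. The HSV thresholds are assumptions for a blue tarp backdrop.

```python
import cv2
import numpy as np

def backdrop_seeded_grabcut(img):
    """Seed GrabCut from blue-backdrop pixels instead of a user-drawn rectangle."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    backdrop = cv2.inRange(hsv, (95, 60, 40), (135, 255, 255))   # blue-ish pixels

    mask = np.where(backdrop > 0, cv2.GC_BGD, cv2.GC_PR_FGD).astype(np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    animal = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # Body measurements (height, length, girth) would then be read off this mask,
    # using the calibration sign for pixel-to-cm scaling (not shown here).
    return animal
```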
APA, Harvard, Vancouver, ISO, and other styles
23

Lee, Min ho, Jin yeong Choi, Jong hyeok Lee, and Jae sang Cha. "Research on Infrastructure technology of Stereoscopic Object Expression Utilizing the Grabcut algorithm." Journal of The Korea Institute of Intelligent Transport Systems 17, no. 5 (2018): 151–59. http://dx.doi.org/10.12815/kits.2018.17.5.151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mao, Jiafa, Kaihui Wang, Yahong Hu, Weiguo Sheng, and Qixin Feng. "GrabCut algorithm for dental X-ray images based on full threshold segmentation." IET Image Processing 12, no. 12 (2018): 2330–35. http://dx.doi.org/10.1049/iet-ipr.2018.5730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Guo, Rongxin, Xian Sun, Kaiqiang Chen, et al. "JMLNet: Joint Multi-Label Learning Network for Weakly Supervised Semantic Segmentation in Aerial Images." Remote Sensing 12, no. 19 (2020): 3169. http://dx.doi.org/10.3390/rs12193169.

Full text
Abstract:
Weakly supervised semantic segmentation in aerial images has attracted growing research attention due to the significant saving in annotation cost. Most of the current approaches are based on one specific pseudo label. These methods easily overfit the wrongly labeled pixels from noisy label and limit the performance and generalization of the segmentation model. To tackle these problems, we propose a novel joint multi-label learning network (JMLNet) to help the model learn common knowledge from multiple noisy labels and prevent the model from overfitting one specific label. Our combination strategy of multiple proposals is that we regard them all as ground truth and propose three new multi-label losses to use the multi-label guide segmentation model in the training process. JMLNet also contains two methods to generate high-quality proposals, which further improve the performance of the segmentation task. First we propose a detection-based GradCAM (GradCAMD) to generate segmentation proposals from object detectors. Then we use GradCAMD to adjust the GrabCut algorithm and generate segmentation proposals (GrabCutC). We report the state-of-the-art results on the semantic segmentation task of iSAID and mapping challenge dataset when training with bounding boxes annotations.
APA, Harvard, Vancouver, ISO, and other styles
26

Magaraja, Anousouya Devi, Ezhilarasie Rajapackiyam, Vaitheki Kanagaraj, et al. "A Hybrid Linear Iterative Clustering and Bayes Classification-Based GrabCut Segmentation Scheme for Dynamic Detection of Cervical Cancer." Applied Sciences 12, no. 20 (2022): 10522. http://dx.doi.org/10.3390/app122010522.

Full text
Abstract:
Earlier detection of cervical cancer remains indispensable for enhancing the survival rate among women patients worldwide. Early detection of cervical cancer is commonly performed using the Pap smear cell test. This method of detection is challenged by the degradation phenomenon within the image segmentation task that arises when the superpixel count is minimized. This paper introduces a Hybrid Linear Iterative Clustering and Bayes classification-based GrabCut Segmentation Technique (HLC-BC-GCST) for the dynamic detection of cervical cancer. In this proposed HLC-BC-GCST approach, the Linear Iterative Clustering process is employed to cluster the potential features of the preprocessed image, which is then combined with GrabCut to prevent the issues that arise when the number of superpixels is minimized. In addition, the proposed HLC-BC-GCST scheme benefits from the advantages of the Gaussian mixture model (GMM) applied to the features extracted by the iterative clustering method, based on which the mapping is performed to describe the energy function. Then, Bayes classification is used for reconstructing the graph cut model from the energy function derived from the GMM-based Linear Iterative Clustering features for better computation and implementation. Finally, the boundary optimization method is utilized to considerably minimize the roughness of cervical cells, which contain the cytoplasm and nuclei regions, using the GrabCut algorithm to facilitate improved segmentation accuracy. The results of the proposed HLC-BC-GCST scheme are 6% better than the results obtained by other standard detection approaches for cervical cancer using graph cuts.
APA, Harvard, Vancouver, ISO, and other styles
27

Wu, Shibin, Shaode Yu, Ling Zhuang, et al. "Automatic Segmentation of Ultrasound Tomography Image." BioMed Research International 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/2059036.

Full text
Abstract:
Ultrasound tomography (UST) image segmentation is fundamental in breast density estimation, medicine response analysis, and anatomical change quantification. Existing methods are time-consuming and require massive manual interaction. To address these issues, an automatic algorithm based on GrabCut (AUGC) is proposed in this paper. The presented method designs automated GrabCut initialization for incomplete labeling and is sped up with multicore parallel programming. To verify performance, AUGC is applied to segment thirty-two in vivo UST volumetric images. The performance of AUGC is validated with breast overlapping metrics (Dice coefficient (D), Jaccard (J), and False positive (FP)) and time cost (TC). Furthermore, AUGC is compared to other methods, including Confidence Connected Region Growing (CCRG), watershed, and Active Contour based Curve Delineation (ACCD). Experimental results indicate that AUGC achieves the highest accuracy (D = 0.9275, J = 0.8660, and FP = 0.0077) and takes on average about 4 seconds to process a volumetric image. These results suggest that AUGC can benefit large-scale studies using UST images for breast cancer screening and pathological quantification.
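The automated-initialization-plus-multicore idea could be sketched as below, with an Otsu-based rough mask standing in for the paper's incomplete-labeling scheme and Python multiprocessing standing in for its parallel programming; slices are assumed to be 8-bit BGR images.

```python
import cv2
import numpy as np
from multiprocessing import Pool

def segment_slice(img):
    # Automatic GrabCut on one UST slice: an Otsu rough mask is eroded/dilated
    # into definite/probable labels, so no manual rectangle is needed.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, rough = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((15, 15), np.uint8)
    mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
    mask[cv2.dilate(rough, kernel) > 0] = cv2.GC_PR_FGD
    mask[cv2.erode(rough, kernel) > 0] = cv2.GC_FGD
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

def segment_volume(slices, workers=4):
    # Segment the slices of one volumetric image on several cores (call this from
    # inside an `if __name__ == "__main__":` guard when running as a script).
    with Pool(workers) as pool:
        return pool.map(segment_slice, slices)
```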
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Y., H. Li, Y. Han, and F. Yu. "RESEARCH ON METHOD OF INTERACTIVE SEGMENTATION BASED ON REMOTE SENSING IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 13, 2017): 961–64. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-961-2017.

Full text
Abstract:
In this paper, we aim to solve the object extraction problem in remote sensing images using interactive segmentation tools. Firstly, an overview of interactive segmentation algorithms is presented. Then, our detailed implementation of intelligent scissors and GrabCut for remote sensing images is described. Finally, several experiments on different typical features (water areas, vegetation) in remote sensing images are performed. Compared with the manual results, they indicate that our tools maintain good feature boundaries and show good performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Ünver, Halil Murat, and Enes Ayan. "Skin Lesion Segmentation in Dermoscopic Images with Combination of YOLO and GrabCut Algorithm." Diagnostics 9, no. 3 (2019): 72. http://dx.doi.org/10.3390/diagnostics9030072.

Full text
Abstract:
Skin lesion segmentation has a critical role in the early and accurate diagnosis of skin cancer by computerized systems. However, automatic segmentation of skin lesions in dermoscopic images is a challenging task owing to difficulties including artifacts (hairs, gel bubbles, ruler markers), indistinct boundaries, low contrast and varying sizes and shapes of the lesion images. This paper proposes a novel and effective pipeline for skin lesion segmentation in dermoscopic images combining a deep convolutional neural network named as You Only Look Once (YOLO) and the GrabCut algorithm. This method performs lesion segmentation using a dermoscopic image in four steps: 1. Removal of hairs on the lesion, 2. Detection of the lesion location, 3. Segmentation of the lesion area from the background, 4. Post-processing with morphological operators. The method was evaluated on two publicly well-known datasets, that is the PH2 and the ISBI 2017 (Skin Lesion Analysis Towards Melanoma Detection Challenge Dataset). The proposed pipeline model has achieved a 90% sensitivity rate on the ISBI 2017 dataset, outperforming other deep learning-based methods. The method also obtained close results according to the results obtained from other methods in the literature in terms of metrics of accuracy, specificity, Dice coefficient, and Jaccard index.
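Steps 3 and 4 of the pipeline might be sketched as follows, with the detector's bounding box (assumed format x, y, w, h) seeding a rectangle-initialized GrabCut and morphological operators cleaning the result; hair removal and YOLO detection are not shown.

```python
import cv2
import numpy as np

def segment_lesion(img, box):
    """Segment a lesion from a dermoscopic image given a detector bounding box."""
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, box, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    lesion = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # Post-processing with morphological operators to remove specks and close gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    lesion = cv2.morphologyEx(lesion, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(lesion, cv2.MORPH_CLOSE, kernel)
```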
APA, Harvard, Vancouver, ISO, and other styles
30

Zha, Jiale, Huaixin Chen, Chengjie Ren, Chenggang Wang, and Siqi Li. "A novel method of extracting geometric features of ships based on GrabCut algorithm." Journal of Physics: Conference Series 1693 (December 2020): 012092. http://dx.doi.org/10.1088/1742-6596/1693/1/012092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Guo, Jiayi, Xuelin Guo, and Limin Wang. "The Detection Algorithm of Broken Wires in Power Lines Based on Grabcut Segmentation." IOP Conference Series: Materials Science and Engineering 768 (March 31, 2020): 072017. http://dx.doi.org/10.1088/1757-899x/768/7/072017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sen Saxena, Vivek, Prashant Johri, and Avneesh Kumar. "AI-Enabled Support System for Melanoma Detection and Classification." International Journal of Reliable and Quality E-Healthcare 10, no. 4 (2021): 58–75. http://dx.doi.org/10.4018/ijrqeh.2021100104.

Full text
Abstract:
Skin lesion melanoma is the deadliest type of cancer. Artificial intelligence provides the power to classify skin lesions as melanoma or non-melanoma. The proposed system for melanoma detection and classification involves four steps: pre-processing (resizing all the images, removing noise and hair from dermoscopic images); image segmentation (identifying the lesion area); feature extraction (extracting features from the segmented lesion); and classification (categorizing the lesion as malignant (melanoma) or benign (non-melanoma)). A modified GrabCut algorithm is employed to segment the skin lesion. Segmented lesions are classified using machine learning algorithms such as SVM, k-NN, ANN, and logistic regression and evaluated on performance metrics like accuracy, sensitivity, and specificity. Results are compared with existing systems, achieving a higher similarity index and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
33

Kadhim, N. M. S. M., M. Mourshed, and M. T. Bray. "SHADOW DETECTION FROM VERY HIGH RESOLUTION SATELLITE IMAGE USING GRABCUT SEGMENTATION AND RATIO-BAND ALGORITHMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W2 (March 10, 2015): 95–101. http://dx.doi.org/10.5194/isprsarchives-xl-3-w2-95-2015.

Full text
Abstract:
Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further the understanding of the built environment. However, to extract shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopted two approaches that are considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm on the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects is examined using the true colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises a visible spectrum range (RGB true colour), the results demonstrate that applying the GrabCut algorithm to the WorldView-3 image separates shadow regions reasonably well from other objects. In addition, the derived shadow map from the Quickbird image indicates significant performance of the ratio algorithm. The differences in the characteristics of the two satellite imageries in terms of spatial and spectral resolution can play an important role in the estimation and detection of the shadow of urban objects.
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Su Huan, and Jian Yin. "Color Filter Based on Image ROI Extraction." Applied Mechanics and Materials 246-247 (December 2012): 1121–24. http://dx.doi.org/10.4028/www.scientific.net/amm.246-247.1121.

Full text
Abstract:
With the rapid development of the Internet, more and more enterprises establish business sites for online transactions. Taking taobao.com as an example, hundreds of millions of goods are traded on the platform. Given the huge commodity image database, extracting image features makes it much easier for people to find images that match user requirements. This paper focuses mainly on the color features of images. Firstly, we segment the ROI of images using the GrabCut algorithm; secondly, we extract the primary colors of images using the dominant color descriptor of MPEG-7; thirdly, we adopt RGB color quantization to quantize the primary colors. Finally, we achieve image navigation by color. We conducted experiments to compare with other methods and found that the adopted algorithms perform better.
APA, Harvard, Vancouver, ISO, and other styles
35

Fu, Ruigang, Biao Li, Yinghui Gao, and Ping Wang. "Fully automatic figure-ground segmentation algorithm based on deep convolutional neural network and GrabCut." IET Image Processing 10, no. 12 (2016): 937–42. http://dx.doi.org/10.1049/iet-ipr.2016.0009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zheng, Junwen, Yuhui Zhang, and Ying Wang. "Research on apple detection and maturity assessment based on computer vision technology." Highlights in Science, Engineering and Technology 101 (May 20, 2024): 423–31. http://dx.doi.org/10.54097/vysbg371.

Full text
Abstract:
The study is based on computer vision technology for apple detection and maturity assessment. Firstly, the YOLOv7 algorithm is used for target detection of apples, and the number of apples in the image is counted to generate a distribution histogram of the number of apples. Then, the position of each apple is detected by the YOLOv7 algorithm and a 2D scatter plot of the geometric coordinates of the apples is drawn. Next, the apple foreground is extracted by interactive ROI tagging and the GrabCut algorithm, and a mathematical model is developed to assess apple maturity based on histogram analysis of apple colours in the HSV colour space model. In addition, the Faster R-CNN model is used to detect apples and estimate the quality of apples based on the 2D area of apples to generate a distribution histogram of apple quality. Finally, a convolutional neural network is used to build a fruit recognition model and draw a distribution histogram of apple image ID numbers. This study provides computer vision technical support for apple picking robots to improve apple production efficiency.
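A hedged sketch of the maturity step: GrabCut on a user-tagged ROI isolates the apple, and a hue histogram in HSV yields a simple red-to-green ratio. The hue ranges and the ratio-based score are assumptions, not the paper's model.

```python
import cv2
import numpy as np

def ripeness_score(img, roi):
    """Estimate ripeness from the hue histogram of a GrabCut-segmented apple.

    `roi` = (x, y, w, h) stands in for the interactive ROI tag.
    """
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, roi, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    apple = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], apple, [180], [0, 180]).ravel()
    red = hist[:11].sum() + hist[160:].sum()   # hue bins around red
    green = hist[35:86].sum()                  # hue bins around green/yellow-green
    return float(red / (red + green + 1e-6))   # 0 = green apple, 1 = fully red
```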
APA, Harvard, Vancouver, ISO, and other styles
37

Mohammed, H. M., and N. El-Sheimy. "SEGMENTATION OF IMAGE PAIRS FOR 3D RECONSTRUCTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 175–80. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-175-2019.

Full text
Abstract:
Image segmentation is an essential task in many computer vision applications such as object detection and recognition, object tracking, image classification, and 3D reconstruction. Most of the current techniques utilise the colour or grayscale information of an image without considering the camera geometry. In this paper, a method is proposed to utilise the camera relative orientation of a pair of images to find a reliable object segmentation. The inputs to the method are a rectified image pair and a disparity map, which can be computed from the rectified image pair; the disparity map is used to determine a set of local homographies between planar surfaces in the two images. The planar surfaces correspond to image segments despite inconsistencies in the RGB information. Homography-based segmentation alone is not reliable due to possible noise in the disparity map and the existence of non-planar objects in the scene. Therefore, an RGB technique is used as a complementary approach to enhance the segmentation result. Two colour-based segmentation techniques are used here: the first is the colour edge detector, and the second is GrabCut. Experimental results show that although the colour edge detector is a simpler algorithm than GrabCut, it does not include noisy data in the segmentation results. This is useful for 3D reconstruction, as it is preferable to exclude noisy areas like the sky and window glass. The outcome of the proposed segmentation algorithm is an object-based segmentation of the pair of images as well as a segmented disparity map.
APA, Harvard, Vancouver, ISO, and other styles
38

Wilkowski, Artur, Maciej Stefańczyk, and Włodzimierz Kasprzak. "Training Data Extraction and Object Detection in Surveillance Scenario." Sensors 20, no. 9 (2020): 2689. http://dx.doi.org/10.3390/s20092689.

Full text
Abstract:
Police and various security services use video analysis for securing public space, mass events, and when investigating criminal activity. Due to a huge amount of data supplied to surveillance systems, some automatic data processing is a necessity. In one typical scenario, an operator marks an object in an image frame and searches for all occurrences of the object in other frames or even image sequences. This problem is hard in general. Algorithms supporting this scenario must reconcile several seemingly contradicting factors: training and detection speed, detection reliability, and learning from small data sets. In the system proposed here, we use a two-stage detector. The first region proposal stage is based on a Cascade Classifier while the second classification stage is based either on a Support Vector Machines (SVMs) or Convolutional Neural Networks (CNNs). The proposed configuration ensures both speed and detection reliability. In addition to this, an object tracking and background-foreground separation algorithm is used, supported by the GrabCut algorithm and a sample synthesis procedure, in order to collect rich training data for the detector. Experiments show that the system is effective, useful, and applicable to practical surveillance tasks.
APA, Harvard, Vancouver, ISO, and other styles
39

Fang, Chaowei, Ziyin Zhou, Junye Chen, Hanjing Su, Qingyao Wu, and Guanbin Li. "Variance-Insensitive and Target-Preserving Mask Refinement for Interactive Image Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (2024): 1698–706. http://dx.doi.org/10.1609/aaai.v38i2.27937.

Full text
Abstract:
Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing. However, fully extracting the target mask with limited user inputs remains challenging. We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement to enhance segmentation quality with fewer user inputs. Regarding the last segmentation result as the initial mask, an iterative refinement process is commonly employed to continually enhance the initial mask. Nevertheless, conventional techniques suffer from sensitivity to the variance in the initial mask. To circumvent this problem, our proposed method incorporates a mask matching algorithm for ensuring consistent inferences from different types of initial masks. We also introduce a target-aware zooming algorithm to preserve object information during downsampling, balancing efficiency and accuracy. Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
APA, Harvard, Vancouver, ISO, and other styles
40

Cui, Weihong, Guofeng Wang, Chenyi Feng, Yiwei Zheng, Jonathan Li, and Yi Zhang. "SPMK AND GRABCUT BASED TARGET EXTRACTION FROM HIGH RESOLUTION REMOTE SENSING IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 195–203. http://dx.doi.org/10.5194/isprs-archives-xli-b7-195-2016.

Full text
Abstract:
Target detection and extraction from high-resolution remote sensing images is a basic and widely needed application. In this paper, to improve the efficiency of image interpretation, we propose a combined detection and segmentation method to realize semi-automatic target extraction. We introduce the dense transform color scale invariant feature transform (TC-SIFT) descriptor and the histogram of oriented gradients (HOG) & HSV descriptor to characterize the spatial structure and color information of the targets. With the k-means clustering method, we get the bag of visual words, and then we adopt a three-level spatial pyramid (SP) to represent the target patch. After gathering many different kinds of target image patches from high-resolution UAV images, and using the TC-SIFT-SP and the multi-scale HOG & HSV features, we constructed an SVM classifier to detect the targets. In this paper, we take buildings as the targets. Experimental results show that the target detection accuracy for buildings can reach above 90%. Based on the detection results, which are a series of rectangular regions around the targets, we select these rectangular regions as foreground candidates and adopt a GrabCut-based, boundary-regularized semi-automatic interactive segmentation algorithm to get the accurate boundary of the target. Experimental results show its accuracy and efficiency. It can be an effective way to extract certain special targets.
APA, Harvard, Vancouver, ISO, and other styles
41

Park, Jong-Hun, Gang-Seong Lee, and Sang-Hun Lee. "A Study on the Convergence Technique enhanced GrabCut Algorithm Using Color Histogram and modified Sharpening filter." Journal of the Korea Convergence Society 6, no. 6 (2015): 1–8. http://dx.doi.org/10.15207/jkcs.2015.6.6.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Sashuang, Mei Jiang, Dongjian He, Yan Long, and Huaibo Song. "Recognition of green apples in an orchard environment by combining the GrabCut model and Ncut algorithm." Biosystems Engineering 187 (November 2019): 201–13. http://dx.doi.org/10.1016/j.biosystemseng.2019.09.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Busireddy, Seshakagari Haranadha Reddy, Venkatramana R, and Jayasree L. "Enhancing Apple Fruit Quality Detection with Augmented YOLOv3 Deep Learning Algorithm." International Journal of Human Computations and Intelligence 4, no. 1 (2025): 386–96. https://doi.org/10.5281/zenodo.14998944.

Full text
Abstract:
Precise apple detection is essential in the food manufacturing industry to provide quality control in production lines for differentiating between fresh and damaged apples. Various apple detection difficulties are found even before harvest in today's environment. However, post-harvest evaluation is still crucial for identifying apple species and assessing quality to expedite food processing procedures. This study presents a sophisticated detection model for multi-class apple recognition to distinguish between regular, damaged, and red delicious apples. The proposed model enhances Augment-YOLOv3 by integrating background removal through GrabCut, thereby improving object localization. Additionally, extra spatial pyramid pooling and a Swish activation function are incorporated to optimize feature retention during training. The YOLOv3 framework is refined using the Darknet53 backbone with feature pyramid network-based spatial pooling, ensuring superior feature extraction before object detection. The final classification layer precisely distinguishes between apple categories. Experimental evaluations reveal that the Augment-YOLOv3 model achieves a mean average precision (mAP) of 98.20%, outperforming conventional YOLOv3 and YOLOv4 models. The study leverages a newly curated Kaggle dataset, utilizing Google Colab with an NVIDIA Tesla K-80 GPU for inference, ensuring precise object localization and robust multi-object detection performance.
APA, Harvard, Vancouver, ISO, and other styles
44

Wei, Hongtao, Lei Tang, Wenshuo Wang, and Jiaming Zhang. "Home Environment Augmented Reality System Based on 3D Reconstruction of a Single Furniture Picture." Sensors 22, no. 11 (2022): 4020. http://dx.doi.org/10.3390/s22114020.

Full text
Abstract:
With the popularization of the concept of “metaverse”, Augmented Reality (AR) technology is slowly being applied to people’s daily life as its underlying technology support. In recent years, rapid 3D reconstruction of interior furniture to meet AR shopping needs has become a new method. In this paper, a virtual home environment system is designed and the related core technologies in the system are studied. Background removal and instance segmentation are performed for furniture images containing complex backgrounds, and a Bayesian Classifier and GrabCut (BCGC) algorithm is proposed to improve on the traditional foreground-background separation technique. The reconstruction part takes the classical occupancy network reconstruction algorithm as the network basis and proposes a precise occupancy network (PONet) algorithm, which can reconstruct the structural details of furniture images, and the model accuracy is improved. Because the traditional 3D registration model is prone to the problems of model position shift and inaccurate matching with the scene, the AKAZE-based tracking registration algorithm is improved, and a Multiple Filtering-AKAZE (MF-AKAZE) based on AKAZE is proposed to remove mismatched points. The matching accuracy is increased by improving the RANSAC mismatch-filtering algorithm based on further screening of the matching results. Finally, the system is verified to realize AR visualization of the furniture model, completing both reconstruction and registration effectively.
APA, Harvard, Vancouver, ISO, and other styles
45

Lu, Zheng, and Dali Chen. "Weakly Supervised and Semi-Supervised Semantic Segmentation for Optic Disc of Fundus Image." Symmetry 12, no. 1 (2020): 145. http://dx.doi.org/10.3390/sym12010145.

Full text
Abstract:
Weakly supervised and semi-supervised semantic segmentation has been widely used in the field of computer vision, since it does not require ground truth, or only needs a small number of ground truths, for training. Recently, some works use pseudo ground truths generated by a classification network to train the model; however, this method is not suitable for medical image segmentation. To tackle this challenging problem, we use the GrabCut method to generate the pseudo ground truths in this paper; we then train the network, based on a modified U-net model, with the generated pseudo ground truths, and finally we utilize a small amount of ground truth to fine-tune the model. Extensive experiments on the challenging RIM-ONE and DRISHTI-GS benchmarks strongly demonstrate the effectiveness of our algorithm. We obtain state-of-the-art results on the RIM-ONE and DRISHTI-GS databases.
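Pseudo-ground-truth generation with GrabCut could be sketched as below, assuming each training image comes with a rough optic-disc bounding box; the resulting binary masks would serve as noisy labels for a modified U-Net, with real ground truths used later for fine-tuning (training loop not shown).

```python
import cv2
import numpy as np

def make_pseudo_labels(images, boxes):
    """Generate noisy pseudo ground-truth masks with GrabCut for weakly supervised training.

    `images` are 8-bit BGR fundus images and `boxes` are (x, y, w, h) rough
    optic-disc bounding boxes, one per image (both are illustrative assumptions).
    """
    pseudo = []
    for img, (x, y, w, h) in zip(images, boxes):
        mask = np.zeros(img.shape[:2], np.uint8)
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
        pseudo.append(np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                               1, 0).astype(np.uint8))
    return pseudo
```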
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Sungjin, Ahyoung Lee, and Min Hong. "Cardiac CT Image Segmentation for Deep Learning–Based Coronary Calcium Detection Using K-Means Clustering and Grabcut Algorithm." Computer Systems Science and Engineering 46, no. 2 (2023): 2543–54. http://dx.doi.org/10.32604/csse.2023.037055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Choi, Jintak, Seungeun Lee, Kyungtae Kang, and Hyojoong Suh. "Lightweight Machine Learning Method for Real-Time Espresso Analysis." Electronics 13, no. 4 (2024): 800. http://dx.doi.org/10.3390/electronics13040800.

Full text
Abstract:
Coffee crema plays a crucial role in assessing the quality of espresso. In recent years, in response to the rising labor costs, aging population, remote security/authentication needs, civic awareness, and the growing preference for non-face-to-face interactions, robot cafes have emerged. While some people seek sentiment and premium coffee, there are also many who desire quick and affordable options. To align with the trends of this era, there is a need for lightweight artificial intelligence algorithms for easy and quick decision making, as well as monitoring the extraction process in these automated cafes. However, the application of these technologies to actual coffee machines has been limited. In this study, we propose an innovative real-time coffee crema control system that integrates lightweight machine learning algorithms. We employ the GrabCut algorithm to segment the crema region from the rest of the image and use a clustering algorithm to determine the optimal brewing conditions for each cup of espresso based on the characteristics of the crema extracted. Our results demonstrate that our approach can accurately analyze coffee crema in real time. This research proposes a promising direction by leveraging computer vision and machine learning technologies to enhance the efficiency and consistency of coffee brewing. Such an approach enables the prediction of component replacement timing in coffee machines, such as the replacement of water filters, and provides administrators with Before Service. This could lead to the development of fully automated artificial intelligence coffee making systems in the future.
APA, Harvard, Vancouver, ISO, and other styles
48

Choi, Jintak, Seungeun Lee, and Kyungtae Kang. "Espresso Crema Analysis with f-AnoGAN." Mathematics 13, no. 4 (2025): 547. https://doi.org/10.3390/math13040547.

Full text
Abstract:
This study proposes a system that evaluates the quality of espresso crema in real time using the deep learning-based anomaly detection model f-AnoGAN. The system integrates mobile devices to collect sensor data during the extraction process, enabling quick adjustments for optimal results. Using the GrabCut algorithm to separate the crema from the background, the detection accuracy is improved. The experimental results show an increase of 0.13 in ROC-AUC on the CIFAR-10 dataset and, for crema images, an improvement in ROC-AUC from 0.963 to 1.000 through VAE and hyperparameter optimization, achieving optimal anomaly classification in the images. A Pearson correlation coefficient of 0.999 confirms the effectiveness of the system. Key contributions include hyperparameter optimization, improved f-AnoGAN performance using VAE, integration of mobile devices, and improved image preprocessing. This research demonstrates the potential of AI in the management of coffee quality.
APA, Harvard, Vancouver, ISO, and other styles
49

Lidasan, Johaira U., and Martina P. Tagacay. "Mushroom Recognition using Neural Network." International Journal of Computer Science Issues 15, no. 5 (2018): 52–57. https://doi.org/10.5281/zenodo.1467659.

Full text
Abstract:
An application would be beneficial if it is real time and could give its users enough information. This would be of greater advantage for mobile applications. Mushroom Recognition using Neural Network is a mobile-based application that combined the power of neural network with image processing to recognize mushroom image based on its order and family and if it is edible or inedible/poisonous. It is a multi-class classification program that recognizes mushroom image from 3 orders and 8 families defined in this research. The application used the GrabCut algorithm for image segmentation and Probabilistic Neural Network (PNN) as its classifier that trains and classifies the mushroom image. This application used 133 mushroom images as its training data and obtained an accuracy rate of 92%. This could be used as an educational tool both for Biology students and people in IT fields. It could also help mycologists identify wild mushrooms.
APA, Harvard, Vancouver, ISO, and other styles
50

Jin, Zhenxun, Fengyan Zhong, Qiang Zhang, Weisong Wang, and Xuanyin Wang. "Visual detection of tobacco packaging film based on apparent features." International Journal of Advanced Robotic Systems 18, no. 3 (2021): 172988142110248. http://dx.doi.org/10.1177/17298814211024839.

Full text
Abstract:
The main purpose of this article is to study the detection of transparent film on the surface of tobacco packs. The tobacco production line needs an industrial robot to remove the transparent film in the process of unpacking. Therefore, after the industrial robot removes the transparent film, it is necessary to use machine vision technology to determine whether there is transparent film residue on the surface of the tobacco packaging. In this article, based on the study of the optical features of semitransparent objects, an algorithm for detecting the residue of transparent film on tobacco packs based on surface features is proposed. According to the difference in surface features between tobacco and film, a probability distribution model considering highlights, saturation, and texture density is designed. Because the probability distribution model integrates many features of tobacco and film, it is better able to distinguish the tobacco and film regions. In this article, an appropriate foreground box with a trapezoidal mask and the GrabCut image segmentation algorithm are used to segment the foreground area of the tobacco pack more accurately, and the possible film area is obtained by image differencing and morphological processing. Finally, after comparing the effect of various machine learning algorithms on the image classification of possible film regions, a support vector machine based on color features is used to judge the possible film regions. Application results of the system show that the method proposed in this article can effectively detect whether there is film residue on the surface of the tobacco pack.
APA, Harvard, Vancouver, ISO, and other styles