
Journal articles on the topic 'Kekre's LUV color space'


Consult the top 30 journal articles for your research on the topic 'Kekre's LUV color space.'


1

Abin, Deepa, and Sudeep D. Thepade. "Video Frame Illumination Inconsistency Reduction using CLAHE with Kekre's LUV Color Space." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 620–24. https://doi.org/10.35940/ijeat.C5322.029320.

Abstract:
Visual frame quality is of utmost significance in numerous computer vision applications such as object detection, video surveillance, optical motion capture, multimedia, and human-computer interfaces. In both controlled and uncontrolled environments, video frame quality is degraded by illumination variations, which can hamper interpretability and lead to significant loss of information for background modeling; a good background model, in turn, enhances visual perception. In this work, the local enhancement technique Contrast Limited Adaptive Histogram Equalization (CLAHE) with improved background modeling is explored with Kekre's LUV color space to reduce illumination inconsistency, especially for darker sets of video frames. A significantly improved average entropy of 7.7225 is obtained, higher than that of the existing explored variations of CLAHE.
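A minimal sketch of such a pipeline is given below. OpenCV ships CLAHE but not Kekre's LUV, so the color transform is written out using the linear matrix reported in several Kekre–Thepade papers (L = R+G+B, U = -2R+G+B, V = -G+B); that matrix, the clip limit, and the tile size are assumptions for illustration, not this article's exact configuration.

```python
import cv2
import numpy as np

# Kekre's LUV is a linear transform of RGB. The matrix below is the one
# reported in several Kekre-Thepade papers (an assumption here, since the
# abstract does not restate it): L = R+G+B, U = -2R+G+B, V = -G+B.
KEKRE = np.array([[ 1,  1, 1],
                  [-2,  1, 1],
                  [ 0, -1, 1]], dtype=np.float32)
KEKRE_INV = np.linalg.inv(KEKRE)

def clahe_kekre_luv(bgr, clip=2.0, grid=(8, 8)):
    """Apply CLAHE to the L plane of Kekre's LUV, then map back to RGB.
    clip and grid are illustrative defaults, not the paper's settings."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32)
    luv = rgb @ KEKRE.T                        # per-pixel linear transform
    l8 = (luv[..., 0] / 3.0).astype(np.uint8)  # L in [0, 765] -> 8-bit scale
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    luv[..., 0] = clahe.apply(l8).astype(np.float32) * 3.0
    out = np.clip(luv @ KEKRE_INV.T, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)
```

Equalizing only the L plane and mapping back leaves the chromatic (U, V) content untouched, which is the usual reason for doing CLAHE in a luminance-chromaticity space.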
2

Thepade, Sudeep D., Rohan Awhad, and Prakhar Khandelwal. "Framework for Color and Texture Feature Fusion in Content Based Image Retrieval using Block Truncation Coding with Color Spaces." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 769–74. https://doi.org/10.35940/ijeat.C5242.029320.

Abstract:
With the tremendous growth of social media and digital technologies, the generation, storage, and transfer of huge amounts of information over the internet are on the rise. Images have long been a widely accepted mode of communication, and with the growth of the internet the rate at which images are generated is increasing exponentially. The methods used to retrieve images, however, remain slow and inefficient compared to the rate at which image databases grow. To cope with this explosive increase, the information age has seen major research advances in Content Based Image Retrieval (CBIR). CBIR systems exploit the three major ways in which content is portrayed in images: shape, texture, and color. In a CBIR system, features are extracted from a query image and matched against features stored in a database for retrieval. This provides an objective means of image retrieval, more efficient than subjective human annotation. Application-specific CBIR systems perform very well, but generic CBIR systems remain underdeveloped. Here, Block Truncation Coding (BTC) is chosen as the feature extractor: BTC applied directly to the input image provides color content-based features, while BTC applied after computing Local Binary Patterns (LBP) provides texture content-based features. Previous work uses color, shape, or texture alone; combining more than one descriptor is still under research and may give better performance. The paper presents a framework for color and texture feature fusion in content-based image retrieval using block truncation coding with color spaces. Experimentation is carried out on the Wang dataset of 1000 images in 10 classes of 100 images each. The results show a performance improvement from fusing BTC-extracted color features with texture features extracted by applying BTC to the LBP of the image. Conversion from RGB to LUV is done using Kekre's LUV.
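A compact sketch of the color-texture fusion described here: BTC upper/lower means per channel as color features, and the same BTC statistics computed on per-channel LBP maps as texture features. The LBP parameters and the plain concatenation are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def btc_features(img):
    """Block Truncation Coding features: for each channel, the mean splits
    pixels into two groups, and the upper and lower group means are kept."""
    feats = []
    for ch in np.moveaxis(img.astype(np.float64), -1, 0):
        thr = ch.mean()
        feats += [ch[ch >= thr].mean(), ch[ch < thr].mean()]
    return np.array(feats)

def fused_features(rgb, P=8, R=1):
    """Fusion: BTC on the raw channels (color) + BTC on LBP maps (texture)."""
    color = btc_features(rgb)
    lbp = np.dstack([local_binary_pattern(rgb[..., c], P, R) for c in range(3)])
    texture = btc_features(lbp)
    return np.concatenate([color, texture])  # compared query vs. database
```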
3

B., Rajesh, and Archana B. "Segmentation of Optic Disc and Optic Cup to Calculate CDR using Kekre’s LUV Color Space for Detecting Glaucoma." International Journal of Computer Applications 127, no. 17 (2015): 7–11. http://dx.doi.org/10.5120/ijca2015906712.

4

Xue, Yongan, Jinling Zhao, and Mingmei Zhang. "A Watershed-Segmentation-Based Improved Algorithm for Extracting Cultivated Land Boundaries." Remote Sensing 13, no. 5 (2021): 939. http://dx.doi.org/10.3390/rs13050939.

Abstract:
To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed, combining pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance in the Commission Internationale de l'Éclairage (CIE) color spaces Lab and Luv was used as the regional similarity measure for region merging as the post-improvement. The area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline against which to compare the proposed algorithm. Validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image covering 0.12 km². The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation effect and time efficiency under the improved algorithm; time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were C (minimum area) of 2000, 1900, and 2000, and D (color difference) of 1000, 40, and 40. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Extraction accuracy, measured by δA, δP, and Khat, improved by 76.92%, 62.01%, and 16.83% in the Lab color space and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction effect of the proposed algorithm was clearly better than that of the RGB-based algorithm, and the established accuracy indicators proved consistent with visual evaluation. (5) The proposed method transfers well to a wider test area covering 1 km². Overall, the proposed method performs region merging in the CIE color spaces on the simulated-immersion watershed segmentation results after image contrast enhancement. It is a useful refinement of the watershed segmentation algorithm for extracting cultivated land boundaries and provides a reference for enhancing the watershed algorithm.
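The post-improvement step reduces to a Euclidean color-distance test between adjacent regions; a schematic single-pass version (with the abstract's minimum area and color-difference D as parameters, and wrap-around neighbor lookup at image borders ignored) might look like:

```python
import numpy as np

def merge_regions(labels, luv, min_area, D):
    """Schematic single-pass merge: a region smaller than min_area is fused
    with its most similar neighbor if the Euclidean distance between the
    regions' mean Luv colors is below D."""
    ids = list(np.unique(labels))
    means = {i: luv[labels == i].mean(axis=0) for i in ids}
    for i in ids:
        mask = labels == i
        if mask.sum() >= min_area or not mask.any():
            continue
        touching = set()  # 4-neighborhood adjacency via shifted label maps
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            touching |= set(np.unique(np.roll(labels, (dy, dx), (0, 1))[mask]))
        touching.discard(i)
        if touching:
            j = min(touching, key=lambda t: np.linalg.norm(means[i] - means[t]))
            if np.linalg.norm(means[i] - means[j]) < D:
                labels[mask] = j
    return labels
```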
5

Sandhopi, Lukman Zaman P.C.S.W, and Yosi Kristian. "Identifikasi Motif Jepara pada Ukiran dengan Memanfaatkan Convolutional Neural Network." Jurnal Nasional Teknik Elektro dan Teknologi Informasi 9, no. 4 (2020): 403–13. http://dx.doi.org/10.22146/jnteti.v9i4.541.

Abstract:
As carving motifs develop, their forms and variations become increasingly diverse, which makes it difficult to determine whether a carving bears a Jepara motif. In this paper, a transfer learning method with a redesigned fully connected (FC) layer is used to identify characteristic Jepara motifs in a carving. The dataset is prepared in three color spaces: LUV, RGB, and YCrCb. In addition, sliding windows, non-max suppression, and heat maps are used to search the carving object area and identify Jepara motifs. Test results across all weights show that Xception achieves the highest classification accuracy for Jepara motifs: 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color space datasets, respectively. However, when all model weights are applied in the Jepara motif identification system, ResNet50 outperforms all networks, with motif identification rates of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results demonstrate that the system can help determine whether a carving is a Jepara carving by identifying the characteristic Jepara motifs it contains.
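The sliding-window search with non-max suppression mentioned above is standard; a generic greedy IoU-based NMS sketch (the threshold is illustrative, not the paper's value):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.4):
    """Greedy non-max suppression over [x1, y1, x2, y2] boxes (numpy arrays)."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thr]  # drop windows overlapping the winner
    return keep
```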
6

Thepade, Sudeep D., Shalakha Vijaykumar Bang, Rik Das, and Zahid Akhtar. "Machine learning-based land usage identification using Haralick texture features of aerial images with Kekre's LUV colour space." International Journal of Computational Science and Engineering 25, no. 5 (2022): 562. http://dx.doi.org/10.1504/ijcse.2022.126255.

7

Sari, Filiz, and Ali Burak Ulas. "Deep Learning Application in Detecting Glass Defects with Color Space Conversion and Adaptive Histogram Equalization." Traitement du Signal 39, no. 2 (2022): 731–36. http://dx.doi.org/10.18280/ts.390238.

Abstract:
Manually detecting defects on the surfaces of glass products is a slow and time-consuming part of quality control, so computer-aided systems including image processing and machine learning techniques are used to overcome this problem. In this study, scratch and bubble defects of jars, photographed in a studio with a white matte background and a -60° peak angle, are investigated with the Yolo-V3 deep learning technique. The performance obtained on the raw data is 94.65%. Color space conversion (CSC) techniques, HSV and CIE-Lab Luv, are applied to the resulting images, and the V channels are selected for preprocessing. While the HSV method decreases performance, an increase is observed with the CIE-Lab Luv method. With the CIE-Lab Luv method combined with adaptive histogram equalization, the maximum recall, precision, and F1-score reach above 97%. When Yolo-V3 is compared with Faster R-CNN, Yolo-V3 gives better results in all analyses, and both methods achieve their highest overall accuracy when adaptive histogram equalization is applied to CIE-Lab Luv.
8

Elizabeth, C. P. Blesslin, and K. Usha Kingsly Devi. "Spectral Clustering of Images in LUV Color Space by Spatial-Color Pixel Classification." International Journal of Computer Applications 3, no. 9 (2010): 1–5. http://dx.doi.org/10.5120/771-1082.

9

Song, De Rui, Dao Yan Xu, and Li Li. "A Novel Edge Detection Algorithm of Color Image." Applied Mechanics and Materials 446-447 (November 2013): 976–80. http://dx.doi.org/10.4028/www.scientific.net/amm.446-447.976.

Abstract:
This paper proposes a novel edge detection algorithm for color images using the LUV color space. First, peer group filtering (PGF), a nonlinear algorithm for image smoothing and impulse noise removal in color images, is applied. Second, color image edges are obtained automatically by combining an improved isotropic edge detector with a fast entropy thresholding technique. Third, edges are further detected according to the color distance between each pixel and its eight neighboring pixels. Finally, experiments demonstrate the performance of the proposed algorithm for color image edge detection.
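The third step, thresholding the color distance between a pixel and its eight neighbors, can be written compactly; here OpenCV's CIE Luv conversion stands in for the paper's conversion, and the threshold T is a placeholder:

```python
import cv2
import numpy as np

def neighbor_color_edges(bgr, T=15.0):
    """Mark a pixel as an edge when the largest Euclidean Luv distance to
    any of its 8 neighbors exceeds T (T is illustrative; borders wrap)."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    dmax = np.zeros(luv.shape[:2], dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            shifted = np.roll(luv, (dy, dx), axis=(0, 1))
            dmax = np.maximum(dmax, np.linalg.norm(luv - shifted, axis=2))
    return dmax > T
```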
10

Turhal, Umit Cigdem, and Can Dagdelen. "Tensor based statistical segmentation of green vegetation canopy images." Acta Scientiarum. Technology 44 (January 12, 2022): e55708. http://dx.doi.org/10.4025/actascitechnol.v43i1.55708.

Abstract:
Advances in electronics and computing have led to the widespread use of these technologies in everyday life, including agricultural applications under the banner of precision agriculture. The use of technology in agriculture brings benefits such as energy savings, yield increases, and time savings. In this study, a novel learning-based, pixel-wise segmentation method is proposed that uses the Common Vector Approach (CVA), for the first time in the literature, for segmentation. In the proposed method, color regions belonging to vegetation and soil are first cropped manually; then three color space representations (HSV, Lab, and Luv) of the RGB image of each color region are encoded as a third-order color tensor. Unfolding the color tensor in the mode-3 direction yields a 2-D color matrix whose columns are vectors containing the (H, S, a, b, u, v) components of the HSV, Lab, and Luv color spaces for an image pixel; each column vector is treated as an object. Applying CVA in the object space formed by these columns produces a common color vector that represents the shared properties of that color region and is used for segmentation. In the experimental studies, two datasets previously proposed in the literature for open computer vision tasks in precision agriculture are used, and three experiments are performed with different combinations of training and test sets. The performance of the proposed method is compared with that of a deep learning approach, a Convolutional Neural Network (CNN) based semantic segmentation method. In all three experiments the proposed method achieves very high performance relative to the CNN, especially in the second and third experiments, where the dataset combinations include both datasets.
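The tensor bookkeeping is easy to make concrete: stacking the chromatic channels of HSV, Lab, and Luv gives an H×W×6 third-order tensor, and mode-3 unfolding turns it into a matrix whose columns are per-pixel color vectors. A sketch (the CVA step itself is omitted, and OpenCV conversions stand in for the paper's):

```python
import cv2
import numpy as np

def mode3_color_matrix(bgr):
    """Stack the chromatic channels (H, S, a, b, u, v) of HSV, Lab and Luv
    into an (H, W, 6) third-order tensor, then unfold it along mode 3 to
    obtain a 6 x (H*W) matrix of per-pixel color vectors."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    tensor = np.dstack([hsv[..., :2], lab[..., 1:], luv[..., 1:]])
    return tensor.reshape(-1, 6).T  # mode-3 unfolding
```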
11

Tong, Li Ping, Bin Peng, and Yi Wei Fei. "The Color Recognition of Jet Fuel Silver Corrosion Images Based on Color Difference Formula." Advanced Materials Research 503-504 (April 2012): 1033–36. http://dx.doi.org/10.4028/www.scientific.net/amr.503-504.1033.

Abstract:
This article introduces the basic theory of multiple color spaces and their color difference formulas. Through research and experiment, it validates that the HSV and CIE L*a*b* color spaces, with their corresponding color difference formulas, can be used for color recognition in jet fuel silver corrosion images, with results largely in accordance with recognition by the naked eye; this proves the feasibility of both methods. The silver strip corrosion experiment is a mandatory corrosion detection test in the acceptance, supply, and storage of jet fuel, and whether the fuel is qualified is judged mainly from the color of the silver corrosion. For a computer vision system, color is a property of an object's surface, the combined effect of the recognition system, illumination, and viewing conditions, and it plays an important role in image segmentation and identification. The colors of visible light are continuous, so for convenient measurement and calculation scholars have established more than ten color spaces, falling into three main types: spaces such as HSV together with RGB, HIS, and Munsell; application-specific spaces such as YUV, YIQ, and CMY adopted by television systems; and the CIE family, including XYZ, Lab, and Luv. This article comparatively studies representative color spaces, including RGB, HIS, CMY, YUV, and CIE Lab, for the accuracy of color recognition in jet fuel silver strip corrosion images, and finally determines a color space and color difference formula suited to this recognition task.
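The CIE L*a*b* color difference used in such work is normally the CIE76 formula, ΔE*ab = sqrt(ΔL² + Δa² + Δb²):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# delta_e_ab((52.1, 10.4, -3.0), (50.0, 12.0, 0.2)) -> a single dE value;
# pixels whose dE to a reference corrosion color falls below a tolerance
# would be assigned that corrosion grade (the tolerance is illustrative).
```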
12

Zhang, Mingmei, Yongan Xue, Yonghui Ge, and Jinling Zhao. "Watershed Segmentation Algorithm Based on Luv Color Space Region Merging for Extracting Slope Hazard Boundaries." ISPRS International Journal of Geo-Information 9, no. 4 (2020): 246. http://dx.doi.org/10.3390/ijgi9040246.

Abstract:
To accurately identify slope hazards based on high-resolution remote sensing imagery, an improved watershed segmentation algorithm is proposed. The color difference of the Luv color space was used as the regional similarity measure for region merging. Furthermore, the area relative error for evaluating the image segmentation accuracy was improved and supplemented with the pixel quantity error to evaluate the segmentation accuracy. An unstable slope was identified to validate the algorithm on Chinese Gaofen-2 (GF-2) remote sensing imagery by a multiscale segmentation extraction experiment. The results show the following: (1) the optimal segmentation and merging scale parameters were, respectively, minimum threshold constant C for minimum area A_min of 500 and optimal threshold D for a color difference of 400. (2) The total processing time for segmentation and merging of unstable slopes was 39.702 s, much lower than the maximum likelihood classification method and a little more than the object-oriented classification method. The relative error of the slope hazard area was 4.92% and the pixel quantity error was 1.60%, which were superior to the two classification methods. (3) The evaluation criteria of segmentation accuracy were consistent with the results of visual interpretation and the confusion matrix, indicating that the criteria established in this study are reliable. By comparing the time efficiency, visual effect and classification accuracies, the proposed method has a good comprehensive extraction effect. It can provide a technical reference for promoting the rapid extraction of slope hazards based on remote sensing imagery. Meanwhile, it also provides a theoretical and practical experience reference for improving the watershed segmentation algorithm.
13

Ragb, Hussin K., and Vijayan K. Asari. "Local Phase Features in Chromatic Domain for Human Detection." International Journal of Monitoring and Surveillance Technologies Research 4, no. 3 (2016): 52–72. http://dx.doi.org/10.4018/ijmstr.2016070104.

Abstract:
In this paper, a new descriptor based on the phase congruency concept and LUV color space features is presented. Since the phase of a signal conveys more information about signal structure than its magnitude, and color is indispensable for describing the world around us, the proposed descriptor can identify and localize image features more precisely than gradient-based techniques, especially in regions affected by illumination changes. The proposed features are formed by extracting phase congruency information for each pixel in the three color channels and selecting the maximum phase congruency values across the corresponding channels. Histograms of the phase congruency values of local image regions are computed with respect to orientation and concatenated to construct the proposed descriptor. Experiments show that the descriptor has better detection performance and lower error rates than a set of state-of-the-art feature extraction methodologies.
14

Lu, Yuanyao, and Qingqing Liu. "Lip Segmentation Based on Combined Color Space and ACM with Rhombic Initial Contour." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 08 (2018): 1856008. http://dx.doi.org/10.1142/s0218001418560086.

Abstract:
Lip segmentation is one of the critical steps in a lip-reading system because it closely relates to the recognition accuracy of the system. In this paper, we aim to improve the accuracy of lip segmentation. A novel color space is proposed consisting of the [Formula: see text] component of the CIE-LUV space and the sum of the [Formula: see text]2 and [Formula: see text]3 components of the image after the discrete Hartley transform (DHT). A rhombus is selected as the initial contour, since its shape is relatively close to a closed lip shape. Segmentation is performed with the active contour model (ACM), using the Chan–Vese formulation; the result for each component is obtained separately, and the final result is produced by merging the per-component results. Experiments show that this method yields a more accurate and smoother lip contour. The proposed method is also more efficient than the classic ACM because it avoids some of its problems, such as the need to set the radius of the initial contour manually according to the image size.
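The rhombic initialization is the paper's main twist on Chan–Vese; below is a sketch of building such an initial level set, paired with scikit-image's morphological Chan–Vese as a stand-in solver (the rhombus center, half-axes, and iteration count are placeholders):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def rhombus_level_set(shape, center, a, b):
    """Binary rhombus |x-cx|/a + |y-cy|/b <= 1 used as the initial contour."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (np.abs(xx - center[1]) / a + np.abs(yy - center[0]) / b) <= 1

# gray: 2-D float image of the mouth region (assumed given)
# init = rhombus_level_set(gray.shape,
#                          (gray.shape[0] // 2, gray.shape[1] // 2), 40, 25)
# lips = morphological_chan_vese(gray, 60, init_level_set=init.astype(np.uint8))
```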
15

Liu, Wending, Hanxing Liu, Yuan Wang, Xiaorui Zheng, and Junguo Zhang. "A Novel Extraction Method for Wildlife Monitoring Images with Wireless Multimedia Sensor Networks (WMSNs)." Applied Sciences 9, no. 11 (2019): 2276. http://dx.doi.org/10.3390/app9112276.

Abstract:
In remote areas, wireless multimedia sensor networks (WMSNs) have limited energy, and the data processing of wildlife monitoring images always suffers from energy consumption limitations. Generally, only part of each wildlife image is valuable. Therefore, the above mentioned issue could be avoided by transmitting the target area. Inspired by this transport strategy, in this paper, we propose an image extraction method with a low computational complexity, which can be adapted to extract the target area (i.e., the animal) and its background area according to the characteristics of the image pixels. Specifically, we first reconstruct a color space model via a CIELUV (LUV) color space framework to extract the color parameters. Next, according to the importance of the Hermite polynomial, a Hermite filter is utilized to extract the texture features, which ensures the accuracy of the split extraction of wildlife images. Then, an adaptive mean-shift algorithm is introduced to cluster texture features and color space information, realizing the extraction of the foreground area in the monitoring image. To verify the performance of the algorithm, a demonstration of the extraction of field-captured wildlife images is presented. Further, we conduct a comparative experiment with N-cuts (N-cuts), the existing aggregating super-pixels (SAS) algorithm, and the histogram contrast saliency detection (HCS) algorithm. A comparison of the results shows that the proposed algorithm for monitoring image target area extraction increased the average pixel accuracy by 11.25%, 5.46%, and 10.39%, respectively; improved the relative limit measurement accuracy by 1.83%, 5.28%, and 12.05%, respectively; and increased the average mean intersection over the union by 7.09%, 14.96%, and 19.14%, respectively.
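The clustering stage can be prototyped with scikit-learn's MeanShift over per-pixel Luv-plus-texture features; here a generic texture response map stands in for the paper's Hermite-filter output, and a fixed bandwidth replaces the adaptive one:

```python
import cv2
import numpy as np
from sklearn.cluster import MeanShift

def cluster_pixels(bgr, texture, bandwidth=12.0, step=4):
    """Mean-shift clustering of subsampled pixels on (L, u, v, texture)
    features; `texture` is any 2-D response map standing in for the
    paper's Hermite responses (subsampling keeps the sketch tractable)."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    feats = np.dstack([luv, texture.astype(np.float32)])[::step, ::step]
    X = feats.reshape(-1, feats.shape[-1])
    labels = MeanShift(bandwidth=bandwidth).fit(X).labels_
    return labels.reshape(feats.shape[:2])
```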
16

Singh, Gurleen, and Sukhpreet Kaur. "Combination of Brightness Preserving Bi-Histogram Equalization and Discrete Wavelet Transform using LUV Color Space for Image Enhancement." International Journal of Computer Applications 148, no. 13 (2016): 26–30. http://dx.doi.org/10.5120/ijca2016911284.

17

Ndayikengurukiye, Didier, and Max Mignotte. "Salient Object Detection by LTP Texture Characterization on Opposing Color Pairs under SLICO Superpixel Constraint." Journal of Imaging 8, no. 4 (2022): 110. http://dx.doi.org/10.3390/jimaging8040110.

Abstract:
The effortless detection of salient objects by humans has been the subject of research in several fields, including computer vision, as it has many applications. However, salient object detection remains a challenge for many computer models dealing with color and textured images. Most of them process color and texture separately and therefore implicitly consider them as independent features which is not the case in reality. Herein, we propose a novel and efficient strategy, through a simple model, almost without internal parameters, which generates a robust saliency map for a natural image. This strategy consists of integrating color information into local textural patterns to characterize a color micro-texture. It is the simple, yet powerful LTP (Local Ternary Patterns) texture descriptor applied to opposing color pairs of a color space that allows us to achieve this end. Each color micro-texture is represented by a vector whose components are from a superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero parameter) algorithm, which is simple, fast and exhibits state-of-the-art boundary adherence. The degree of dissimilarity between each pair of color micro-textures is computed by the FastMap method, a fast version of MDS (Multi-dimensional Scaling) that considers the color micro-textures’ non-linearity while preserving their distances. These degrees of dissimilarity give us an intermediate saliency map for each RGB (Red–Green–Blue), HSL (Hue–Saturation–Luminance), LUV (L for luminance, U and V represent chromaticity values) and CMY (Cyan–Magenta–Yellow) color space. The final saliency map is their combination to take advantage of the strength of each of them. The MAE (Mean Absolute Error), MSE (Mean Squared Error) and Fβ measures of our saliency maps, on the five most used datasets show that our model outperformed several state-of-the-art models. Being simple and efficient, our model could be combined with classic models using color contrast for a better performance.
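The LTP operator at the heart of this model codes each neighbor into three states and is conventionally split into a pair of binary patterns; a minimal single-pixel version (the threshold t is illustrative):

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local Ternary Pattern of a 3x3 patch: each neighbor codes +1 / 0 / -1
    against center +/- t, then the ternary code is split into the usual
    'upper' and 'lower' binary patterns."""
    c = patch[1, 1]
    ring = patch.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]  # clockwise 8-neighbors
    tern = np.where(ring > c + t, 1, np.where(ring < c - t, -1, 0))
    upper = sum(int(v == 1) << i for i, v in enumerate(tern))
    lower = sum(int(v == -1) << i for i, v in enumerate(tern))
    return upper, lower
```

In the paper this operator is applied to opposing color pairs of each color space rather than to grayscale, and the resulting micro-texture vectors are compared per SLICO superpixel.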
18

Wang, Yi, and Lihong Xu. "Unsupervised segmentation of greenhouse plant images based on modified Latent Dirichlet Allocation." PeerJ 6 (June 28, 2018): e5036. http://dx.doi.org/10.7717/peerj.5036.

Abstract:
Agricultural greenhouse plant images with complicated scenes are difficult to label manually with precision, and the appearance of leaf disease spots and mosses further complicates plant segmentation. Considering these problems, this paper proposes a statistical image segmentation algorithm, MSBS-LDA (Mean-shift Bandwidths Searching Latent Dirichlet Allocation), which performs unsupervised segmentation of greenhouse plants. The main idea is to exploit the language model LDA (Latent Dirichlet Allocation) for image segmentation based on the design of spatial documents. The maximum points of the probability density function in image space are mapped to documents, and Mean-shift is used to carry out the word-document assignment. The proportion of the most frequent word in the word-frequency statistics determines the coordinate-space bandwidth, and the spatial LDA segmentation procedure iteratively searches for the optimal color-space bandwidth in light of the LUV distances between classes. In view of the fruit present in the plant segmentation result and the ever-changing illumination in greenhouses, an improved watershed-based leaf segmentation method is proposed to further segment the leaves. Experimental results show that the proposed methods can segment greenhouse plants and leaves in an unsupervised way and obtain high segmentation accuracy together with effective extraction of the fruit part.
19

Nindam, Somsawut, Seung-Hoon Na, and Hyo Jong Lee. "MultiFusedNet: A Multi-Feature Fused Network of Pretrained Vision Models via Keyframes for Student Behavior Classification." Applied Sciences 14, no. 1 (2023): 230. http://dx.doi.org/10.3390/app14010230.

Abstract:
This research proposes a deep learning method for classifying student behavior in classrooms that follow the professional learning community teaching approach. We collected data on five student activities: hand-raising, interacting, sitting, turning around, and writing. We used the sum of absolute differences (SAD) in the LUV color space to detect scene changes, and the K-means algorithm was then applied to the computed SAD values to select keyframes. Next, we extracted features using multiple pretrained deep learning models from the convolutional neural network family: InceptionV3, ResNet50V2, VGG16, and EfficientNetB7. We leveraged feature fusion, incorporating optical flow features and data augmentation techniques, to enrich the spatial features of the selected keyframes. Finally, we classified the students' behavior using a deep sequence model based on the bidirectional long short-term memory network with an attention mechanism (BiLSTM-AT). The proposed method with the BiLSTM-AT model recognizes behaviors from our dataset with high precision, recall, and F1-scores of 0.97, 0.97, and 0.97, respectively, and an overall accuracy of 96.67%. This efficiency demonstrates the potential of the proposed method for classifying student behavior in classrooms.
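The keyframe stage is straightforward to reproduce: SAD between consecutive frames in the Luv space, then K-means over the SAD values, keeping the frame nearest each cluster center. The cluster count is illustrative, and OpenCV's CIE Luv conversion stands in for the paper's:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def keyframe_indices(frames, k=5):
    """SAD between consecutive frames in Luv; K-means groups the SAD values
    and the frame transition nearest each cluster center is kept."""
    luv = [cv2.cvtColor(f, cv2.COLOR_BGR2Luv).astype(np.int32) for f in frames]
    sad = np.array([np.abs(a - b).sum() for a, b in zip(luv, luv[1:])],
                   dtype=np.float64).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(sad)
    centers = km.cluster_centers_.ravel()
    return sorted(int(np.abs(sad.ravel() - c).argmin()) for c in centers)
```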
20

"Framework for Color and Texture Feature Fusion in Content Based Image Retrieval using Block Truncation Coding with Color Spaces." International Journal of Engineering and Advanced Technology 9, no. 3 (2020): 769–74. http://dx.doi.org/10.35940/ijeat.c5242.029320.

Abstract:
With the tremendous growth of social media and digital technologies, the generation, storage, and transfer of huge amounts of information over the internet are on the rise. Images have long been a widely accepted mode of communication, and with the growth of the internet the rate at which images are generated is increasing exponentially. The methods used to retrieve images, however, remain slow and inefficient compared to the rate at which image databases grow. To cope with this explosive increase, the information age has seen major research advances in Content Based Image Retrieval (CBIR). CBIR systems exploit the three major ways in which content is portrayed in images: shape, texture, and color. In a CBIR system, features are extracted from a query image and matched against features stored in a database for retrieval. This provides an objective means of image retrieval, more efficient than subjective human annotation. Application-specific CBIR systems perform very well, but generic CBIR systems remain underdeveloped. Here, Block Truncation Coding (BTC) is chosen as the feature extractor: BTC applied directly to the input image provides color content-based features, while BTC applied after computing Local Binary Patterns (LBP) provides texture content-based features. Previous work uses color, shape, or texture alone; combining more than one descriptor is still under research and may give better performance. The paper presents a framework for color and texture feature fusion in content-based image retrieval using block truncation coding with color spaces. Experimentation is carried out on the Wang dataset of 1000 images in 10 classes of 100 images each. The results show a performance improvement from fusing BTC-extracted color features with texture features extracted by applying BTC to the LBP of the image. Conversion from RGB to LUV is done using Kekre's LUV.
21

Kekre, H. B., Sudeep D. Thepade, Sanchit Khandelwal, Karan Dhamejani, and Adnan Azmi. "Improved Face Recognition with Multilevel BTC using Kekre's LUV Color Space." International Journal of Advanced Computer Science and Applications 3, no. 1 (2012). http://dx.doi.org/10.14569/ijacsa.2012.030124.

22

Kekre, H. B., and Sudeep D. Thepade. "Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images." International Journal of Information, Control and Computer Sciences 1, no. 5 (2008). https://doi.org/10.5281/zenodo.1330983.

Abstract:
Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a bigger mosaic image of a desired view from a set of partial images. The paper presents a solution to the problem of panorama formation when some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts to grayscale and fuse them into a grayscale panorama, but in a multihued world a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and injecting them into the grayscale parts: first the grayscale image parts are colored with the help of the color image parts, and then all parts are fused to construct the panorama. The problem of coloring grayscale images has no exact solution. In the proposed technique, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. The color palette is prepared using pixel windows of some size taken from the color image parts; the grayscale image part is then divided into pixel windows of the same size, and for every grayscale window the palette is searched for equivalent color values, which are used to color that window. For palette preparation, both the RGB color space and Kekre's LUV color space are used; Kekre's LUV gives better coloring quality. The search through the color palette is made faster than exhaustive search using Kekre's fast search technique. After coloring the grayscale pieces, the next job is to fuse all the pieces to obtain the panoramic view; the correlation coefficient is used for similarity estimation between partial images.
23

"Video Frame Illumination Inconsistency Reduction using CLAHE with Kekre’s LUV Color Space." International Journal of Engineering and Advanced Technology 9, no. 3 (2020): 620–24. http://dx.doi.org/10.35940/ijeat.c5322.029320.

Abstract:
Visual frame quality is of utmost significance in numerous computer vision applications such as object detection, video surveillance, optical motion capture, multimedia, and human-computer interfaces. In both controlled and uncontrolled environments, video frame quality is degraded by illumination variations, which can hamper interpretability and lead to significant loss of information for background modeling; a good background model, in turn, enhances visual perception. In this work, the local enhancement technique Contrast Limited Adaptive Histogram Equalization (CLAHE) with improved background modeling is explored with Kekre's LUV color space to reduce illumination inconsistency, especially for darker sets of video frames. A significantly improved average entropy of 7.7225 is obtained, higher than that of the existing explored variations of CLAHE.
24

"Image Retrieval with Fusion of Thepade’s Sorted Block Truncation Coding n-ary based Color and Local Binary Pattern based Texture Features with Different Color Places." International Journal of Innovative Technology and Exploring Engineering 9, no. 5 (2020): 28–34. http://dx.doi.org/10.35940/ijitee.e1963.039520.

Abstract:
In recent years there has been gigantic growth in the generation of data, facilitated by innovations such as the Internet, social media, and smartphones. Since ancient times images have been treated as an effective mode of communication, and even today most of the data generated is image data. The technology for capturing, storing, and transferring images is well developed, but efficient image retrieval is still a primitive area of research. Content Based Image Retrieval (CBIR) is one such area where much research is ongoing. CBIR systems rely on three aspects of image content: texture, shape, and color. Application-specific CBIR systems are effective, whereas generic CBIR systems are still being explored. Previously, descriptors were used to extract shape, color, or texture features alone; the effect of using more than one descriptor is under research and may yield better results. The paper presents the fusion of TSBTC n-ary (Thepade's Sorted n-ary Block Truncation Coding) global color features and Local Binary Pattern (LBP) local texture features in content-based image retrieval with different color spaces. TSBTC n-ary derives global color features from an image; it is a faster and better technique than Block Truncation Coding and is rotation and scale invariant. Applied to an image, TSBTC n-ary gives a feature vector based on the color space; applied to the LBP of the image color planes, it gives a feature vector based on local texture content. Along with RGB, the luminance-chromaticity color spaces YCbCr and Kekre's LUV are also used in the experimentation of the proposed CBIR techniques. The Wang dataset, consisting of 1000 images (10 categories of 100 images each), is used for exploration of the proposed method. The results show performance improvement using the fusion of TSBTC n-ary extracted global color features and local texture features extracted with TSBTC n-ary applied to Local Binary Patterns (LBP).
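TSBTC n-ary, as described in Thepade's earlier work, sorts each channel's pixel values and averages n equal slices instead of thresholding at the channel mean; a sketch under that reading, which this abstract does not spell out:

```python
import numpy as np

def tsbtc_features(img, n=4):
    """Thepade's Sorted BTC n-ary (assumed definition): per channel, sort
    the pixel values, cut the sorted sequence into n equal parts, and keep
    each part's mean, giving 3*n global color features."""
    feats = []
    for ch in np.moveaxis(img.astype(np.float64), -1, 0):
        parts = np.array_split(np.sort(ch.ravel()), n)
        feats += [p.mean() for p in parts]
    return np.array(feats)
```

Sorting makes the features independent of pixel positions, which is why the abstract can claim rotation and scale invariance.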
25

Tamburo, Robert. "RGB Image Color Space Transformations." Insight Journal, December 10, 2010. http://dx.doi.org/10.54294/sn4ute.

Abstract:
This paper describes a set of pixel accessors that transform RGB pixel values to a different color space. Accessors for the HSI, XYZ, Yuv, YUV, HSV, Lab, Luv, HSL, CMY, and CMYK color spaces are provided. The paper is accompanied by source code for the pixel accessors and tests, test images and parameters, and expected output images. Note: the Set() methods are incorrect; a revision will be provided by 12.17.2010.
26

Al-Jasim, Ali Adnan N., Taghreed Abdulhameed Naji, and Auday H. Shaban. "The Effect of Using the Different Satellite Spatial Resolution on the Fusion Technique." Iraqi Journal of Science, September 30, 2022, 4131–41. http://dx.doi.org/10.24996/ijs.2022.63.9.40.

Abstract:
Bilinear interpolation and perceptual color space (HSL, HSV, LAB, and LUV) fusion techniques are presented to improve the spatial and spectral characteristics of a low-resolution multispectral image to match the high spatial resolution of a panchromatic image, using image data from different satellites (Orbview-3 and Landsat-7) over the same region. The Signal-to-Noise Ratio (SNR) fidelity criterion for achromatic information was calculated, along with mean color-shifting parameters that compute the ratio of chromatic information loss of the RGB compound within each pixel, to evaluate the quality of the fused images. The results showed the superiority of the HSL color space for fusing images over the other spaces, as it recorded the highest SNR and the lowest mean color shift. The quality of images fused using the Lab color space was not affected by the type of intermediate algorithm used for XYZ space, unlike the LUV color space, where the XYZ algorithm type affected the fused image results, which recorded the worst outcomes. It is noted that the computational time increases exponentially with the difference in spatial resolution between the fused images.
27

Cheshenko, Dmytro, and Olga Matsuga. "COMPARATIVE ANALYSIS OF DEEP LEARNING MODELS FOR PLANT DISEASE CLASSIFICATION IN DIFFERENT COLOR SPACES." International scientific and technical conference Information technologies in metallurgy and machine building, June 2, 2025, 635–40. https://doi.org/10.34185/1991-7848.itmm.2025.01.114.

Abstract:
This study presents a comparative analysis of the impact of different color spaces on the accuracy of plant disease classification using convolutional neural networks. Three model architectures were trained in four color spaces – RGB, HSV, LUV, and LAB – resulting in a total of 12 models. The experiments were conducted on a dataset containing over 79,000 original images across 88 classes. Classification accuracy was evaluated on both training and test sets. The results indicate that classification performance depends not only on the model architecture but also on the chosen color space. In particular, LAB showed slight advantages in models with fewer parameters, while RGB consistently performed well in more complex models. HSV and LUV generally resulted in lower accuracy. These findings can contribute to improving plant disease diagnosis systems and may also be applicable in other fields such as metallurgy for enhancing the accuracy of defect detection on metal surface images.
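Reproducing such a comparison mostly comes down to converting the dataset once per color space before training; OpenCV covers all four spaces compared here:

```python
import cv2

# One-line conversions for the four color spaces compared in the study;
# each converted variant of the dataset is then fed to the same CNN
# architectures so that only the color representation changes.
CONVERSIONS = {
    "RGB": lambda bgr: cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB),
    "HSV": lambda bgr: cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
    "LUV": lambda bgr: cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv),
    "LAB": lambda bgr: cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab),
}
```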
28

Zhong, Xiaopin, Junjia Guo, and Yuanlong Deng. "Pixel-Classification-Based Reticulocyte Detection in Blood-Smear Microscopy Images." Journal of Medical Devices 13, no. 4 (2019). http://dx.doi.org/10.1115/1.4043919.

Abstract:
Methods for reticulocyte identification and counting based on digital image processing are rare. In this paper, we propose a pixel-based reticulocyte identification method for blood micrographs. This approach not only addresses the slowness of manual methods but also largely alleviates the susceptibility of flow-based methods to nucleic acid. The key is to extract pixel-level features. One color feature is extracted from each of three color spaces (RGB, HSI, and LUV), and three texture features (Gabor features, the gray-level co-occurrence matrix, and the local contrast pattern) are extracted from the Cr channel of the YCbCr color space, forming six features. The recognition performance of each combination of color and texture features was compared, and the combination of Gabor texture features and LUV color features was selected. A support vector machine (SVM) classifier was then used to classify the pixel-level features and detect the RNA-staining area. Based on the location, number, and area of this region, it can be determined whether the target cell is a reticulocyte. The precision of this method for reticulocytes was 98.4%, the recall 98.0%, and the F1 measure 0.982, indicating its usefulness for automation equipment.
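The selected backbone (LUV color plus a Gabor response from the Cr channel, classified per pixel by an SVM) can be prototyped as below; the single Gabor frequency and the training snippet are placeholders for the paper's fuller feature set and protocol:

```python
import cv2
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def pixel_features(bgr, frequency=0.2):
    """Per-pixel feature: (L, u, v) color plus a Gabor magnitude computed
    on the Cr channel, echoing the paper's choice (one frequency here)."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    cr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[..., 1]
    real, imag = gabor(cr, frequency=frequency)
    mag = np.hypot(real, imag).astype(np.float32)
    return np.dstack([luv, mag]).reshape(-1, 4)

# Training (a per-pixel stain mask is assumed given):
# clf = SVC(kernel="rbf").fit(pixel_features(train_img), stain_mask.ravel())
# pred = clf.predict(pixel_features(test_img)).reshape(test_img.shape[:2])
```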
29

Sajith, Kecheril S., Venkataraman D, Suganthi J, and Sujathan K. "SEGMENTATION OF LUNG GLANDULAR CELLS USING MULTIPLE COLOR SPACES." June 30, 2012. https://doi.org/10.5121/ijcsea.2012.2315.

Abstract:
Early detection of lung cancer is a challenging problem the world faces today. Prior to classifying glandular cells as malignant or benign, a reliable segmentation technique is required. In this paper we present a novel lung glandular cell segmentation technique that uses a combination of multiple color spaces and various clustering algorithms to automatically find the best possible segmentation result. The unsupervised clustering methods K-means and Fuzzy C-means were applied in multiple color spaces: HSV, LAB, LUV, and xyY. Experimental segmentation results for the various color spaces are provided to show the performance of the proposed system.
30

Qiao, Man, Mingfeng Liu, Zong-Fei Lv, et al. "Balmer Decrement and IRX Break in Tracing Dust Attenuation at Scales of Individual Star-forming Regions in NGC 628." Research in Astronomy and Astrophysics, May 30, 2025. https://doi.org/10.1088/1674-4527/addf03.

Abstract:
We investigate the relationships between the infrared excess (IRX = L_IR/L_UV) and the Balmer decrement (Hα/Hβ) as indicators of dust attenuation for 609 H II regions at scales of ~50–200 pc in NGC 628, utilizing data from AstroSat, the James Webb Space Telescope (JWST), and the Multi Unit Spectroscopic Explorer (MUSE). Our findings indicate that about three fifths of the sample H II regions reside within the regime occupied by local star-forming galaxies (SFGs) along the dust attenuation correlation described by their corresponding color excess parameters, E(B−V)_IRX = 0.51 E(B−V)_Hα/Hβ. Nearly 27% of the sample exhibits E(B−V)_IRX > E(B−V)_Hα/Hβ, while a small fraction (~13%) displays significantly lower E(B−V)_IRX compared to E(B−V)_Hα/Hβ. These results suggest that the correlation between the two dust attenuation indicators no longer holds for spatially resolved H II regions. Furthermore, the ratio of E(B−V)_IRX to E(B−V)_Hα/Hβ remains unaffected by various physical parameters of the H II regions, including star formation rate (SFR), SFR surface density, infrared luminosity (L_IR), L_IR surface density, stellar mass, gas-phase metallicity, circularized radius, and the distance to the galactic center. We argue that the ratio is primarily influenced by the evolution of the surrounding interstellar medium (ISM) of the star-forming regions, transitioning from an early dense and thick phase to a late blown-away stage.