Academic literature on the topic 'Kekre's LUV color space'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kekre's LUV color space.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kekre's LUV color space"

1

Abin, Deepa, and Sudeep D. Thepade. "Video Frame Illumination Inconsistency Reduction using CLAHE with Kekre's LUV Color Space." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 620–24. https://doi.org/10.35940/ijeat.C5322.029320.

Abstract:
Visual frame quality is of utmost significance in numerous computer vision applications such as object detection, video surveillance, optical motion capture, multimedia, and human-computer interfaces. Under controlled or uncontrolled environments, video frame quality is affected by illumination variations. This may hamper interpretability and lead to significant loss of information for background modeling. An excellent background model can enhance visual perception. In this work, a local enhancement technique with improved background modeling, Contrast Limited Adaptive Histogram Equalization (CLAHE), is explored with Kekre's LUV color space to reduce illumination inconsistency, especially for darker sets of video frames. A significantly improved average entropy of 7.7225 has been obtained, which is higher than the existing explored variations of CLAHE.
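
For readers who want to experiment with this idea, below is a minimal Python sketch of CLAHE applied to the luminance plane of Kekre's LUV, using OpenCV. The forward matrix (L = R+G+B, U = -2R+G+B, V = -G+B) is the one commonly quoted in Kekre and Thepade's papers and is an assumption here; verify it against the original publication. This is an illustration, not the authors' implementation.

```python
import cv2
import numpy as np

# Forward matrix for Kekre's LUV as commonly quoted in Kekre and
# Thepade's papers (an assumption; check the original paper):
#   L = R + G + B, U = -2R + G + B, V = -G + B
M = np.array([[ 1,  1,  1],
              [-2,  1,  1],
              [ 0, -1,  1]], dtype=np.float32)
M_INV = np.linalg.inv(M)

def clahe_kekre_luv(bgr, clip=2.0, tiles=(8, 8)):
    """Equalize only the luminance plane of Kekre's LUV, then invert."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32)
    luv = rgb @ M.T                                   # per-pixel [L, U, V]
    l8 = np.clip(luv[..., 0] / 3.0, 0, 255).astype(np.uint8)  # L in 0..765
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    luv[..., 0] = clahe.apply(l8).astype(np.float32) * 3.0
    out = np.clip(luv @ M_INV.T, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)
```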
2

Thepade, Sudeep D., Rohan Awhad, and Prakhar Khandelwal. "Framework for Color and Texture Feature Fusion in Content Based Image Retrieval using Block Truncation Coding with Color Spaces." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 769–74. https://doi.org/10.35940/ijeat.C5242.029320.

Abstract:
With tremendous growth in social media and digital technologies, the generation, storage, and transfer of huge amounts of information over the internet are on the rise. Images, or the visual mode of communication, have been prevalent and widely accepted as a mode of communication for ages, and with the growth of the internet, the rate at which images are generated is growing exponentially. But the methods used to retrieve images are still very slow and inefficient compared to the rate of increase in image databases. To cope with this explosive increase in images, this information age has seen huge research advancement in Content Based Image Retrieval (CBIR). CBIR systems provide a way of utilizing the three major ways in which content is portrayed in images: shape, texture, and color. In a CBIR system, features are extracted from a query image and matched for similarity against features stored in a database for retrieval. This provides an objective way of image retrieval, which is more efficient than subjective human annotation. Application-specific CBIR systems have been developed and perform really well, but generic CBIR systems are still underdeveloped. Block Truncation Coding (BTC) has been chosen as the feature extractor. BTC applied directly to the input image provides color content-based features, and BTC applied after applying LBP to the image provides texture content-based features. Previous work uses either color, shape, or texture, but the use of more than one descriptor is still under research and might give better performance. The paper presents a framework for color and texture feature fusion in content-based image retrieval using Block Truncation Coding with color spaces. Experimentation is carried out on the Wang dataset of 1000 images consisting of 10 classes, each with 100 images. The obtained results show performance improvement using the fusion of BTC-extracted color features and texture features extracted with BTC applied to Local Binary Patterns (LBP). Conversion from the RGB color space to LUV is done using Kekre's LUV.
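
As a rough illustration of the color half of such a pipeline, the sketch below computes per-channel Block Truncation Coding features (mean-thresholded upper and lower means), a common BTC formulation. The paper's exact BTC variant, and the preceding Kekre's LUV conversion, should be taken from the text itself.

```python
import numpy as np

def btc_color_features(img):
    """Per-channel BTC features: threshold each channel at its mean and
    keep the means of the pixels above and below the threshold."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        t = ch.mean()                           # threshold = channel mean
        upper = ch[ch >= t].mean()              # mean above threshold
        lower = ch[ch < t].mean() if (ch < t).any() else t
        feats.extend([upper, lower])
    return np.asarray(feats)                    # 2 features per channel
```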
3

B., Rajesh, and Archana B. "Segmentation of Optic Disc and Optic Cup to Calculate CDR using Kekre's LUV Color Space for Detecting Glaucoma." International Journal of Computer Applications 127, no. 17 (2015): 7–11. https://doi.org/10.5120/ijca2015906712.

4

Xue, Yongan, Jinling Zhao, and Mingmei Zhang. "A Watershed-Segmentation-Based Improved Algorithm for Extracting Cultivated Land Boundaries." Remote Sensing 13, no. 5 (2021): 939. https://doi.org/10.3390/rs13050939.

Abstract:
To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l'Eclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used for evaluating the image segmentation accuracy. Region merging in the Red-Green-Blue (RGB) color space was selected as the comparison for the proposed algorithm in extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km². The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in image segmentation effect and time efficiency using the improved algorithm. The time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were C for minimum areas of 2000, 1900, and 2000, and D for color differences of 1000, 40, and 40. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83%, respectively, in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the comprehensive extraction effect of the proposed algorithm was obviously better than that of the RGB color space-based algorithm. The established accuracy evaluation indicators were also proven to be consistent with the visual evaluation. (5) The proposed method showed satisfying transferability on a wider test area with a coverage area of 1 km². In addition, the proposed method performs region merging in the CIE color spaces, based on image contrast enhancement, according to the simulated immersion watershed segmentation results. It is a useful attempt at applying the watershed segmentation algorithm to extract cultivated land boundaries, and it provides a reference for enhancing the watershed algorithm.
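
The regional similarity measure described here can be pictured as a Euclidean color distance between region means in CIE Luv. The sketch below shows one plausible reading of that measure using OpenCV's built-in conversion; the paper's actual merging rule and scale parameters are more involved.

```python
import cv2
import numpy as np

def region_color_distance(bgr, mask_a, mask_b):
    """Euclidean distance between mean region colors in CIE Luv.
    mask_a, mask_b: boolean arrays marking two candidate regions."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    return float(np.linalg.norm(luv[mask_a].mean(axis=0) -
                                luv[mask_b].mean(axis=0)))
```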
5

Sandhopi, Lukman Zaman P.C.S.W., and Yosi Kristian. "Identifikasi Motif Jepara pada Ukiran dengan Memanfaatkan Convolutional Neural Network." Jurnal Nasional Teknik Elektro dan Teknologi Informasi 9, no. 4 (2020): 403–13. https://doi.org/10.22146/jnteti.v9i4.541.

Abstract:
As carving motifs evolve, their forms and variations become increasingly diverse, which makes it difficult to determine whether a carving bears a Jepara motif. In this paper, a transfer learning method with a redeveloped fully connected (FC) stage is used to identify characteristic Jepara motifs in a carving. The dataset is prepared in three color spaces: LUV, RGB, and YCrCb. In addition, sliding windows, non-max suppression, and heat maps are used to search the carved object areas and identify Jepara motifs. Test results across all weights show that Xception achieves the highest accuracy for Jepara motif classification, namely 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color space datasets, respectively. However, when all of these model weights are applied in the Jepara motif identification system, ResNet50 outperforms all other networks with motif identification rates of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results show that the system can help determine whether or not a carving is a Jepara carving by identifying the characteristic Jepara motifs it contains.
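
The non-max suppression step mentioned in the abstract is the standard greedy procedure sketched below; this is the textbook formulation, not the authors' code.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over [x1, y1, x2, y2] boxes: keep the highest-scoring
    box, drop boxes overlapping it beyond iou_thresh, and repeat."""
    boxes = np.asarray(boxes, dtype=np.float64)
    order = np.argsort(scores)[::-1]        # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thresh]     # drop heavily overlapping boxes
    return keep
```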
6

Thepade, Sudeep D., Shalakha Vijaykumar Bang, Rik Das, and Zahid Akhtar. "Machine learning-based land usage identification using Haralick texture features of aerial images with Kekre's LUV colour space." International Journal of Computational Science and Engineering 25, no. 5 (2022): 562. https://doi.org/10.1504/ijcse.2022.126255.

7

Sari, Filiz, and Ali Burak Ulas. "Deep Learning Application in Detecting Glass Defects with Color Space Conversion and Adaptive Histogram Equalization." Traitement du Signal 39, no. 2 (2022): 731–36. https://doi.org/10.18280/ts.390238.

Abstract:
Manually detecting defects on the surfaces of glass products is a slow and time-consuming part of the quality control process, so computer-aided systems, including image processing and machine learning techniques, are used to overcome this problem. In this study, scratch and bubble defects of jars, photographed in a studio with a white matte background and a -60° peak angle, are investigated with the Yolo-V3 deep learning technique. The obtained performance is 94.65% on the raw data. Color space conversion (CSC) techniques, HSV and CIE-Lab Luv, are applied to the resulting images, and the V channels are selected for preprocessing. While the HSV method decreases performance, an increase is observed with the CIE-Lab Luv method. With the CIE-Lab Luv method combined with adaptive histogram equalization, the maximum recall, precision, and F1-score reach above 97%. Also, comparing Yolo-V3 with Faster R-CNN, it is observed that Yolo-V3 gives better results in all analyses, and the highest overall accuracy is achieved with both methods when adaptive histogram equalization is applied to CIE-Lab Luv.
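
A minimal sketch of this kind of preprocessing is given below, assuming OpenCV's CLAHE as the adaptive histogram equalization and the Luv v channel as the equalized 'V channel'; the authors' exact channel choice and equalization variant should be confirmed from the paper.

```python
import cv2

def luv_v_clahe(bgr, clip=2.0, tiles=(8, 8)):
    """Convert to CIE Luv, equalize the v channel with CLAHE (one common
    adaptive histogram equalization variant), and convert back."""
    l, u, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv))
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return cv2.cvtColor(cv2.merge((l, u, clahe.apply(v))),
                        cv2.COLOR_Luv2BGR)
```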
8

Elizabeth, C. P. Blesslin, and K. Usha Kingsly Devi. "Spectral Clustering of Images in LUV Color Space by Spatial-Color Pixel Classification." International Journal of Computer Applications 3, no. 9 (2010): 1–5. https://doi.org/10.5120/771-1082.

9

Song, De Rui, Dao Yan Xu, and Li Li. "A Novel Edge Detection Algorithm of Color Image." Applied Mechanics and Materials 446-447 (November 2013): 976–80. https://doi.org/10.4028/www.scientific.net/amm.446-447.976.

Abstract:
This paper proposes a novel edge detection algorithm using the LUV color space. First, peer group filtering (PGF), a nonlinear algorithm for image smoothing and impulse noise removal in color images, is applied. Second, color image edges are obtained automatically by combining an improved isotropic edge detector with a fast entropy thresholding technique. Third, edges are further refined according to the color distance between each pixel and its eight neighboring pixels. Finally, experiments demonstrate the performance of the proposed algorithm in color image edge detection.
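
The third step, thresholding the color distance between a pixel and its eight neighbors, can be sketched as follows; PGF smoothing and the entropy-based threshold are omitted, and the fixed threshold here is an arbitrary placeholder.

```python
import cv2
import numpy as np

def luv_neighbor_edges(bgr, thresh=12.0):
    """Edge map from the maximum CIE Luv distance between each pixel and
    its 8 neighbors. np.roll wraps at the borders, which is acceptable
    for a sketch; `thresh` stands in for the paper's entropy threshold."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    dist = np.zeros(luv.shape[:2], dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            shifted = np.roll(luv, (dy, dx), axis=(0, 1))
            dist = np.maximum(dist, np.linalg.norm(luv - shifted, axis=2))
    return np.where(dist > thresh, 255, 0).astype(np.uint8)
```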
10

Turhal, Umit Cigdem, and Can Dagdelen. "Tensor based statistical segmentation of green vegetation canopy images." Acta Scientiarum. Technology 44 (January 12, 2022): e55708. https://doi.org/10.4025/actascitechnol.v43i1.55708.

Abstract:
Advances in electronics and computing have led to the widespread use of these technologies in many everyday applications. One such application area is precision agriculture. The use of technology in agriculture has many benefits, such as energy savings, yield increases, and time savings. In this study, a novel learning-based, pixel-wise segmentation method is proposed that uses the Common Vector Approach (CVA) for segmentation for the first time in the literature. In the proposed method, color regions belonging to vegetation and soil are first manually cropped, and then three color space representations (HSV, Lab, and Luv) of the RGB images for each color region are encoded as third-order color tensors. By unfolding the color tensor in the mode-3 direction, a 2-D color matrix is obtained. The columns of this 2-D color matrix are vectors containing the (H, S, a, b, u, v) components of the HSV, Lab, and Luv color spaces for one image pixel, and each column vector is treated as an object. By applying CVA to the object space consisting of the column vectors of the 2-D color matrix, a common color vector representing the common properties of that color region is obtained and used for segmentation. In the experimental studies, two datasets previously proposed in the literature for open computer vision tasks in precision agriculture are used. Three experiments are performed for different combinations of training and test sets. The performance of the proposed method is compared with that of a deep learning method, a Convolutional Neural Network (CNN)-based semantic segmentation method. In all three experiments, the proposed method achieves considerably higher performance than the CNN, especially in the second and third experiments, where the dataset combinations include both datasets.
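
The tensor construction described in the abstract can be pictured as below: the chromatic channels of HSV, Lab, and Luv are stacked into a third-order tensor whose mode-3 unfolding yields one (H, S, a, b, u, v) column per pixel. This is a sketch under that reading; the paper's exact channel selection and the CVA step itself are not reproduced here.

```python
import cv2
import numpy as np

def mode3_color_matrix(bgr):
    """Stack the chromatic channels of HSV, Lab, and Luv into a
    (H, W, 6) color tensor, then unfold along mode 3 so that each
    column holds the (H, S, a, b, u, v) components of one pixel."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv)
    tensor = np.dstack([hsv[..., :2],      # H, S
                        lab[..., 1:],      # a, b
                        luv[..., 1:]]).astype(np.float32)
    return tensor.reshape(-1, tensor.shape[2]).T   # shape (6, H*W)
```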

Book chapters on the topic "Kekre's LUV color space"

1

Ragb, Hussin K., and Vijayan K. Asari. "Local Phase Features in Chromatic Domain for Human Detection." In Human Performance Technology. IGI Global, 2019. https://doi.org/10.4018/978-1-5225-8356-1.ch033.

Abstract:
In this paper, a new descriptor based on the phase congruency concept and LUV color space features is presented. Since the phase of a signal conveys more information about signal structure than its magnitude, and color is an indispensable quality in describing the world around us, the proposed descriptor can identify and localize image features more precisely than gradient-based techniques, especially in regions affected by illumination changes. The proposed features are formed by extracting the phase congruency information for each pixel in the three color image channels. The maximum phase congruency values are selected from the corresponding color channels. Histograms of the phase congruency values of local regions in the image are computed with respect to orientation, and these histograms are concatenated to construct the proposed descriptor. Experiments show that the proposed descriptor has better detection performance and lower error rates than a set of state-of-the-art feature extraction methodologies.
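
The descriptor construction can be sketched as below, assuming a hypothetical phase_congruency helper that returns per-pixel magnitude and orientation maps for one channel (real implementations exist, e.g. Kovesi's; this is not the authors' code).

```python
import numpy as np

def pc_color_descriptor(img3, phase_congruency, nbins=9, cell=8):
    """Sketch: per-channel phase congruency, per-pixel max across the
    three color channels, then orientation-binned histograms over local
    cells, concatenated. `phase_congruency` is a hypothetical helper
    returning (magnitude, orientation-in-[0, pi)) maps for one channel."""
    pcs, oris = zip(*(phase_congruency(img3[..., c]) for c in range(3)))
    pcs, oris = np.stack(pcs), np.stack(oris)
    best = pcs.argmax(axis=0)                 # winning channel per pixel
    rows, cols = np.indices(best.shape)
    pc, ori = pcs[best, rows, cols], oris[best, rows, cols]
    h, w = pc.shape
    bins = np.minimum((ori / np.pi * nbins).astype(int), nbins - 1)
    hist = np.zeros((h // cell, w // cell, nbins))
    for i in range(h // cell):
        for j in range(w // cell):
            blk = (slice(i * cell, (i + 1) * cell),
                   slice(j * cell, (j + 1) * cell))
            np.add.at(hist[i, j], bins[blk].ravel(), pc[blk].ravel())
    return hist.ravel()                       # concatenated histograms
```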

Conference papers on the topic "Kekre's LUV color space"

1

Kekre, H. B., and Sudeep D. Thepade. "Improving 'Color to Gray and Back' using Kekre's LUV Color Space." In 2009 IEEE International Advance Computing Conference (IACC 2009). IEEE, 2009. https://doi.org/10.1109/iadcc.2009.4809189.

2

Kekre, H. B., Dhirendra Mishra, and Rakhee S. Saboo. "Comparison of image fusion techniques in RGB & Kekre's LUV color space." In 2015 International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE). IEEE, 2015. https://doi.org/10.1109/ablaze.2015.7154979.

3

Thepade, Sudeep D., Jaya H. Dewan, Divya Pritam, and Ratnesh Chaturvedi. "Fire Detection System Using Color and Flickering Behaviour of Fire with Kekre's LUV Color Space." In 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA). IEEE, 2018. https://doi.org/10.1109/iccubea.2018.8697454.

4

Abin, Deepa, and Sudeep D. Thepade. "Illumination Inconsistency Reduction in Video Frames using DSIHE with Kekre's LUV Color Space." In 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV). IEEE, 2021. https://doi.org/10.1109/icicv50876.2021.9388567.

5

Pardhi, Pravin M., and Sudeep D. Thepade. "Enhancement of Nighttime Image Visibility Using Wavelet Fusion of Equalized Color Channels and Luminance with Kekre's LUV Color Space." In 2020 IEEE Bombay Section Signature Conference (IBSSC). IEEE, 2020. https://doi.org/10.1109/ibssc51096.2020.9332180.

6

Stalmeier, Peep F. M., and Charles M. M. de Weert. "Scaling of large color differences." In OSA Annual Meeting. Optica Publishing Group, 1988. https://doi.org/10.1364/oam.1988.mh6.

Abstract:
Similarity data for very large color differences, obtained with the method of triads, are presented. The stimulus consisted of three separated fields subtending 1.06°, each field in a different color. The colors varied in hue, saturation, and luminance. Approximately 73,000 judgments were collected. The data are analyzed using the Luv 1976 color distance diagram. In general, the data accord satisfactorily with the Luv color space provided that weighting factors are inserted in the distance formula.
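
The weighted distance formula referred to here is presumably the CIE 1976 ΔE*uv with per-axis weights, as sketched below; the weight values themselves are reported in the paper.

```python
import math

def weighted_delta_e_uv(c1, c2, w=(1.0, 1.0, 1.0)):
    """c1, c2: (L*, u*, v*) triples; w: per-axis weighting factors.
    Reduces to the standard CIE 1976 delta-E*uv when w == (1, 1, 1)."""
    dL, du, dv = (a - b for a, b in zip(c1, c2))
    wL, wu, wv = w
    return math.sqrt((wL * dL) ** 2 + (wu * du) ** 2 + (wv * dv) ** 2)
```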
7

Xiang, Xuezhi, Yu Peng, and Lei Zhang. "A method of optical flow computation based on LUV color space." In 2009 International Conference on Test and Measurement (ICTM). IEEE, 2009. https://doi.org/10.1109/ictm.2009.5413027.

8

Pritam, Divya, and Jaya H. Dewan. "Detection of fire using image processing techniques with LUV color space." In 2017 2nd International Conference for Convergence in Technology (I2CT). IEEE, 2017. https://doi.org/10.1109/i2ct.2017.8226309.

9

Remmach, Halima, Raja Touahni, and Abderrahmane Sbihi. "Spatial-color mode detection in UV pairwise projection of the CIE LUV color space." In 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). IEEE, 2020. https://doi.org/10.1109/iciot48696.2020.9089516.

10

Rahimzadeganasl, Alireza, and Elif Sertel. "Automatic building detection based on CIE LUV color space using very high resolution Pleiades images." In 2017 25th Signal Processing and Communications Applications Conference (SIU). IEEE, 2017. https://doi.org/10.1109/siu.2017.7960711.
