
Journal articles on the topic 'RGB to gray scale'


Consult the top 50 journal articles for your research on the topic 'RGB to gray scale.'


1

Pierre, Fabien, Jean-François Aujol, Aurélie Bugeau, Gabriele Steidl, and Vinh-Thong Ta. "Variational Contrast Enhancement of Gray-Scale and RGB Images." Journal of Mathematical Imaging and Vision 57, no. 1 (2016): 99–116. http://dx.doi.org/10.1007/s10851-016-0670-8.

2

Qomariyah, Nurul, Rahadi Wirawan, Ni Kadek Nova Anggarani, Laili Mardiana, and Kasnawi Alhadi. "Karakteristik Gaharu Grynops Vertegii (Gilg.) Domke Berdasarkan Analisis Sebaran Gray scale Level." EIGEN MATHEMATICS JOURNAL 1, no. 1 (2019): 44. http://dx.doi.org/10.29303/emj.v1i1.27.

Abstract:
Agarwood Grynops Vertegii (Gilg.) Domke is a type of agarwood widely cultivated in the NTB (West Nusa Tenggara) area. The economic value of agarwood is directly proportional to its quality, and color is one of the physical parameters used to determine that quality. The purpose of this study is to classify Grynops Vertegii (Gilg.) agarwood based on the distribution of gray-scale levels using image processing. The samples are divided into four classes, A, B, C, and D, according to their dominant color. The RGB images are converted to gray-scale images, and their histograms are computed to determine the distribution of gray levels and their intensities. The image-processing results show a shift in peak position as well as differences in gray-scale value and curve width between the classes. The gray-scale values for classes A, B, C, and D are 26, 35, 62, and 121, respectively, with intensity values at the peak positions of 43,300, 42,400, 30,350, and 31,750. A small gray-scale value indicates that the agarwood has a high black density, and vice versa, while the peak position shows the dominant gray-scale value in each class.
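As an illustration of the processing step this abstract describes (converting an RGB image to gray scale and reading the dominant gray level off the histogram peak), a minimal NumPy sketch follows. The BT.601 luminance weights and the 256-bin histogram are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an HxWx3 uint8 RGB image to 8-bit gray scale (ITU-R BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def dominant_gray_level(gray):
    """Return (peak gray level, count at the peak) of the 256-bin gray-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    peak = int(np.argmax(hist))
    return peak, int(hist[peak])

# Toy usage with a synthetic image; real use would load a sample photograph.
rng = np.random.default_rng(0)
sample = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
print(dominant_gray_level(rgb_to_gray(sample)))
```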
3

Hu, Xin Ying, and Xiu Ping Zhao. "A Robust Digital Watermarking Algorithm Based on Digital Image Security." Advanced Materials Research 174 (December 2010): 144–47. http://dx.doi.org/10.4028/www.scientific.net/amr.174.144.

Abstract:
Digital watermarking has been proposed as a way to claim ownership of the source and owner of digital image data. In this paper, a robust algorithm based on the DCT (Discrete Cosine Transform) domain is proposed to improve image security. The algorithm was implemented in MATLAB, and both gray-scale and RGB color images were studied. For color images, to obtain the best image quality, the RGB image was transformed to the YCbCr color space; the Y (brightness) channel was then separated, and the watermark was embedded in and extracted from it. The results show that the algorithm can embed a black-and-white bitmap image of a certain size into gray-scale and color images without the watermark being visible to the naked eye. Robustness experiments were also carried out: the watermark can still be extracted after a certain amount of cropping, defacing, Gaussian noise, and format changes, with a similarity of more than 0.7, confirming that the algorithm is highly robust.
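The abstract outlines the general recipe of separating the Y (luma) channel and embedding a watermark in the DCT domain. The hedged sketch below illustrates that idea only; the BT.601 luma weights, the chosen coefficient position (3, 4), and the embedding strength are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def rgb_to_luma(rgb):
    """Y (brightness) channel of an RGB image, using ITU-R BT.601 weights."""
    rgb = rgb.astype(float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def embed_bit(block8x8, bit, strength=8.0):
    """Embed one watermark bit by nudging a mid-frequency DCT coefficient of an 8x8 luma block."""
    coeffs = dctn(block8x8, norm="ortho")
    coeffs[3, 4] += strength if bit else -strength
    return idctn(coeffs, norm="ortho")

# Toy usage: embed a single bit into the top-left 8x8 block of the luma channel.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
luma = rgb_to_luma(image)
luma[:8, :8] = embed_bit(luma[:8, :8], bit=1)
```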
4

MARSZALEC, ELZBIETA, and MATTI PIETIKÄINEN. "SOME ASPECTS OF RGB VISION AND ITS APPLICATIONS IN INDUSTRY." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 01 (1996): 55–72. http://dx.doi.org/10.1142/s0218001496000062.

Abstract:
RGB machine vision overlaps both colorimetry instrumentation and gray scale machine vision, and offers advantages over both. Proper calibration of the color camera which makes measurements independent of changes in illumination is essential for the reliability of color machine vision. A practical approach to on-line color camera calibration under unstable illumination conditions is presented and the performance of the procedure is evaluated. New potential applications of RGB machine vision are discussed from the perspective of physics-based vision and the calibration procedure developed here.
5

MISHRA, D. C., HIMANI SHARMA, R. K. SHARMA, and NAVEEN KUMAR. "A FIRST CRYPTOSYSTEM FOR SECURITY OF TWO-DIMENSIONAL DATA." Fractals 25, no. 01 (2017): 1750011. http://dx.doi.org/10.1142/s0218348x17500116.

Abstract:
In this paper, we present a novel technique for the security of two-dimensional data using cryptography and steganography. The presented approach provides multilayered security for two-dimensional data: the first layer is provided by cryptography and the second by steganography. The advantage of steganography is that the intended secret message does not attract attention to itself as an object of scrutiny. This paper proposes a novel approach for the encryption and decryption of information in the form of Word data (.doc files), PDF documents (.pdf files), text documents, gray-scale images, and RGB images by using the Vigenere Cipher (VC) combined with the Discrete Fourier Transform (DFT), and then hiding the data behind an RGB image (steganography). Earlier techniques provide security for either PDF data, doc data, text data, or image data, but not for all types of two-dimensional data, and existing techniques use either cryptography or steganography. The proposed approach is suitable for all of these data types and secures the information with both cryptography and steganography. Experimental results for Word data, PDF documents, text documents, gray-scale images, and RGB images support the robustness and suitability of the method for the secure transmission of such data. The security analysis shows that the presented technique is immune to cryptanalysis. The technique also adds security during decryption, as a check on which RGB color the information is hidden behind.
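For reference, the Vigenere cipher named in this abstract is a classical polyalphabetic substitution; a byte-wise version is sketched below. The paper combines it with a DFT stage and image steganography, which are not reproduced here.

```python
def vigenere(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
    """Classical Vigenere cipher applied byte-wise (mod 256)."""
    sign = -1 if decrypt else 1
    return bytes((b + sign * key[i % len(key)]) % 256 for i, b in enumerate(data))

ciphertext = vigenere(b"two-dimensional data", b"secret")
assert vigenere(ciphertext, b"secret", decrypt=True) == b"two-dimensional data"
```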
6

Li, Shipeng, Di Li, Chunhua Zhang, Jiafu Wan, and Mingyou Xie. "RGB-D Image Processing Algorithm for Target Recognition and Pose Estimation of Visual Servo System." Sensors 20, no. 2 (2020): 430. http://dx.doi.org/10.3390/s20020430.

Abstract:
This paper studies the control performance of a visual servoing system with a planar camera and an RGB-D camera. Its contribution is to strengthen performance indicators of the visual servoing system, such as real-time behavior and accuracy, through rapid identification of the target in RGB-D images and precise measurement in the depth direction. Firstly, the color images acquired by the RGB-D camera are segmented based on optimized normalized cuts. Next, the gray scale is restored according to the histogram features of the target image. Then, the depth information of the obtained 2D graphics and the enhanced gray-image information are merged to complete target pose estimation based on the Hausdorff distance, and the current image pose is matched with the target image pose. The end angle and speed of the robot are calculated to complete a control cycle, and the process is iterated until the servo task is completed. Finally, the accuracy and real-time performance of the control system based on the proposed algorithm are tested under a position-based visual servoing scheme. The results demonstrate and validate that the RGB-D image processing algorithm proposed in this paper improves the visual servoing system in these respects.
7

Nidhal, K. EL Abbadi, and Saleem Eman. "Automatic gray images colorization based on lab color space." Indonesian Journal of Electrical Engineering and Computer Science (IJEECS) 18, no. 3 (2020): 1501–9. https://doi.org/10.11591/ijeecs.v18.i3.pp1501-1509.

Abstract:
Colorization aims to transform a black-and-white image into a color image. This is a very hard problem and usually requires manual intervention by the user to produce high-quality, artifact-free images. The general problem of inserting color gradients into a gray image has no exact solution. The proposed method is fully automatic: a reference color image is used to transfer colors to the gray image. The reference image is converted to the Lab color space, while the gray-scale image is normalized according to the lightness channel L. The gray image is then concatenated with the a and b channels before conversion back to an RGB image. The results are promising compared with other methods.
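A rough scikit-image sketch of the pipeline described here (gray image as the L channel, a and b channels borrowed from a reference, conversion back to RGB) is given below. The pixel-aligned reference and the direct channel copy are simplifying assumptions, not the authors' full method.

```python
import numpy as np
from skimage import color

def colorize_with_reference(gray, reference_rgb):
    """Transfer the a/b chroma channels of a reference image onto a gray image.

    `gray` is a float image in [0, 1]; `reference_rgb` is a float RGB image of
    the same height and width.
    """
    ref_lab = color.rgb2lab(reference_rgb)
    lab = np.empty_like(ref_lab)
    lab[..., 0] = gray * 100.0          # gray image becomes the lightness channel L (0-100)
    lab[..., 1:] = ref_lab[..., 1:]     # borrow a and b from the reference
    return color.lab2rgb(lab)
```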
8

Abbadi, Nidhal K. El, and Eman Saleem Razaq. "Automatic gray images colorization based on lab color space." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 3 (2020): 1501. http://dx.doi.org/10.11591/ijeecs.v18.i3.pp1501-1509.

Abstract:
Colorization aims to transform a black-and-white image into a color image. This is a very hard problem and usually requires manual intervention by the user to produce high-quality, artifact-free images. The general problem of inserting color gradients into a gray image has no exact solution. The proposed method is fully automatic: a reference color image is used to transfer colors to the gray image. The reference image is converted to the Lab color space, while the gray-scale image is normalized according to the lightness channel L. The gray image is then concatenated with the a and b channels before conversion back to an RGB image. The results are promising compared with other methods.
9

Li, Jinjun, Hong Zhao, Chengying Shi, and Xiang Zhou. "A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space." Mathematical Problems in Engineering 2011 (2011): 1–14. http://dx.doi.org/10.1155/2011/202653.

Abstract:
A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate disparity map for stereo images. Local multi-model monogenic image features include local orientation and instantaneous phase of the gray monogenic signal, local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of analytic signal to gray level image using Dirac operator and Laplace equation, consists of local amplitude, local orientation, and instantaneous phase of 2D image signal. The color monogenic signal is the extension of monogenic signal to color image based on Clifford algebras. The local color phase can be estimated by computing geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experiment results on the synthetic and natural stereo images show the performance of the proposed approach.
10

He, Xiao Qin, Jin Jun Li, and Xiao Yan Li. "Multi-Scale Stereo Analysis Based on Local Multi-Model Monogenic Image Feature Descriptors." Advanced Materials Research 433-440 (January 2012): 853–59. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.853.

Abstract:
A multi-scale method based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate disparity map for stereo images. Local multi-model monogenic image features include local orientation and instantaneous phase of the gray monogenic signal, local color phase of the color monogenic signal and local mean colors in the multi-scale color monogenic signal framework. The gray monogenic signal, which is the extension of analytic signal to gray level image using Dirac operator and Laplace equation, consists of local amplitude, local orientation and instantaneous phase of 2D image signal. The color monogenic signal is the extension of monogenic signal to color image based on Clifford algebras. The local color phase can be estimated by computing geometric product between the color monogenic signal and a unit reference vector in RGB color space. Because the proposed feature descriptors contain local geometric, structure and color information, it is robust against noise and brightness change in feature matching and 3D reconstruction. Experiment results on the synthetic and natural stereo images show the performance of the proposed approach.
11

Dalui, Indrani, Surajit Goon, and Avisek Chatterjee. "A NEW APPROACH OF FRACTAL COMPRESSION USING COLOR IMAGE." International Journal of Engineering Technologies and Management Research 6, no. 6 (2020): 74–81. http://dx.doi.org/10.29121/ijetmr.v6.i6.2019.395.

Abstract:
Fractal image compression is based on self-similarity, where one segment of an image resembles another segment of the same image. Fractal coding is usually applied to grey-level images. The simplest way to encode a color image with a gray-scale fractal image coding algorithm is to split the RGB color image into its three channels, red, green, and blue, and compress them independently, treating each color component as a separate gray-scale image. The colorimetric organization of RGB color images is examined through the computation of the correlation integral of their three-dimensional histogram. For natural color images, as a typical behavior, this correlation integral is found to follow a power law, with a non-integer exponent characteristic of a given image. This behavior identifies a fractal, or multiscale self-similar, distribution of the colors contained in typical natural images. This finding of a possible fractal structure in the colorimetric organization of natural images complements other fractal properties previously observed in their spatial organization. Such fractal colorimetric properties may be useful for the characterization and modeling of natural images and may contribute to advances in vision. The results obtained demonstrate that fractal-based compression performs comparably well for color images.
12

Indrani, Dalui, Surajit Goon, and Chatterjee Avisek. "A NEW APPROACH OF FRACTAL COMPRESSION USING COLOR IMAGE." International Journal of Engineering Technologies and Management Research 6, no. 6 (2019): 74–81. https://doi.org/10.5281/zenodo.3251776.

Abstract:
Fractal image compression is based on self-similarity, where one segment of an image resembles another segment of the same image. Fractal coding is usually applied to grey-level images. The simplest way to encode a color image with a gray-scale fractal image coding algorithm is to split the RGB color image into its three channels, red, green, and blue, and compress them independently, treating each color component as a separate gray-scale image. The colorimetric organization of RGB color images is examined through the computation of the correlation integral of their three-dimensional histogram. For natural color images, as a typical behavior, this correlation integral is found to follow a power law, with a non-integer exponent characteristic of a given image. This behavior identifies a fractal, or multiscale self-similar, distribution of the colors contained in typical natural images. This finding of a possible fractal structure in the colorimetric organization of natural images complements other fractal properties previously observed in their spatial organization. Such fractal colorimetric properties may be useful for the characterization and modeling of natural images and may contribute to advances in vision. The results obtained demonstrate that fractal-based compression performs comparably well for color images.
13

Yang, Song, Yu Hou, Yuheng Shang, and Xin Zhong. "BPNN and CNN-based AI modeling of spreading and icing pattern of a water droplet impact on a supercooled surface." AIP Advances 12, no. 4 (2022): 045209. http://dx.doi.org/10.1063/5.0082568.

Abstract:
A water droplet impacting on a supercooled surface normally experiencing spreading and freezing is a complex process involving fluid flow, heat transfer, and phase change. We established two models to, respectively, predict the spreading dynamics of a water droplet impact on a supercooled surface and classify the icing patterns to predict the corresponding surface supercooling degree. Six important factors are used to characterize droplet spreading, including Reynolds number, Weber number, Ohnesorge number, surface supercooling degree, the maximum spreading factor, and the dimensionless maximum spreading time. A Back Propagation Neural Network model, including four inputs and two outputs, is established, containing a hidden layer with 15 neurons to perform the non-linear regression training on the spreading factors of 778 groups of an impact water droplet. The trained model is adopted to predict the spreading factors of 86 groups of a water droplet impact on the supercooled surface. The second model is developed to discern and classify the experimentally captured three different icing patterns. Different clustering methods are performed on 116 icing images, including gray-scale and red-green-blue (RGB) clustering. Then, two convolution neural network models of VGG-19 (Visual Geometry Group-19) and VGG-16 are established to classify, train, and test the icing images by gray-scale and RGB clustering methods. The K = 2 gray-scale clustering and the VGG-19 model exhibits the highest accuracy at 90.57%. The two models developed in this study can, respectively, predict the essential factors characterizing spreading dynamics of an impact droplet on a cold surface and predict surface supercooling degree based on an icing pattern.
14

Sudharshan Duth, P., and M. Mary Deepa. "Color detection in RGB-modeled images using MAT LAB." International Journal of Engineering & Technology 7, no. 2.31 (2018): 29. http://dx.doi.org/10.14419/ijet.v7i2.31.13391.

Abstract:
This research work introduces a method of using color thresholds to identify colors in two-dimensional images in MATLAB, using the RGB color model to recognize the color chosen by the user in the picture. The color-detection methodology converts the 3-D RGB image into a gray-scale image, subtracts the two images to obtain a 2-D black-and-white image, filters noisy pixels with a median filter, labels connected components in the binary image, and uses the bounding box and its properties to compute a metric for every labeled region. In addition, the shade of each pixel is identified by examining its RGB value. The color-detection algorithm is implemented using the MATLAB Image Processing Toolbox. The result of this implementation can be used in security applications such as spy robots, object tracking, color-based object isolation, and intrusion detection.
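The paper works in MATLAB with the Image Processing Toolbox; an equivalent NumPy/SciPy sketch of the same detection steps (gray-scale subtraction, thresholding, median filtering, and connected-component labeling with bounding boxes) is shown below for red objects. The threshold and minimum-area values are placeholders.

```python
import numpy as np
from scipy import ndimage

def detect_red_regions(rgb, threshold=50, min_area=100):
    """Rough color detection: red channel minus gray, threshold, median filter,
    then connected-component labeling with bounding boxes."""
    rgb = rgb.astype(np.int16)
    gray = rgb.mean(axis=2)                    # simple gray-scale image
    diff = rgb[..., 0] - gray                  # red channel minus gray image
    mask = ndimage.median_filter((diff > threshold).astype(np.uint8), size=5).astype(bool)
    labels, _ = ndimage.label(mask)
    boxes = ndimage.find_objects(labels)       # one bounding-box slice pair per region
    return [b for b in boxes
            if (b[0].stop - b[0].start) * (b[1].stop - b[1].start) >= min_area]
```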
15

Seo, Jongwoong, Seungwook Son, Seunghyun Yu, Hwapyeong Baek, and Yongwha Chung. "Depth-Oriented Gray Image for Unseen Pig Detection in Real Time." Applied Sciences 15, no. 2 (2025): 988. https://doi.org/10.3390/app15020988.

Abstract:
With the increasing demand for pork, improving pig health and welfare management productivity has become a priority. However, it is impractical for humans to manually monitor all pigsties in commercial-scale pig farms, highlighting the need for automated health monitoring systems. In such systems, object detection is essential. However, challenges such as insufficient training data, low computational performance, and generalization issues in diverse environments make achieving high accuracy in unseen environments difficult. Conventional RGB-based object detection models face performance limitations due to brightness similarity between objects and backgrounds, new facility installations, and varying lighting conditions. To address these challenges, this study proposes a DOG (Depth-Oriented Gray) image generation method using various foundation models (SAM, LaMa, Depth Anything). Without additional sensors or retraining, the proposed method utilizes depth information from the testing environment to distinguish between foreground and background, generating depth background images and establishing an approach to define the Region of Interest (RoI) and Region of Uninterest (RoU). By converting RGB input images into the HSV color space and combining HSV-Value, inverted HSV-Saturation, and the generated depth background images, DOG images are created to enhance foreground object features while effectively suppressing background information. Experimental results using low-cost CPU and GPU systems demonstrated that DOG images improved detection accuracy (AP50) by up to 6.4% compared to conventional gray images. Moreover, DOG image generation achieved real-time processing speeds, taking 3.6 ms on a CPU, approximately 53.8 times faster than the GPU-based depth image generation time of Depth Anything, which requires 193.7 ms.
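The exact fusion rule for the DOG image is not spelled out in this abstract; the sketch below simply averages HSV value, inverted HSV saturation, and an inverted depth-background map (assumed to be precomputed and scaled to [0, 1]). Both the equal weighting and that assumption are illustrative only.

```python
import numpy as np
from skimage import color

def dog_like_gray(rgb_uint8, depth_background):
    """Combine HSV value, inverted HSV saturation, and a depth-derived
    background map into a single gray channel, as a rough stand-in for the
    DOG image described above."""
    hsv = color.rgb2hsv(rgb_uint8.astype(float) / 255.0)
    value = hsv[..., 2]
    inverted_saturation = 1.0 - hsv[..., 1]
    # depth_background is high where the static background is, in [0, 1].
    return (value + inverted_saturation + (1.0 - depth_background)) / 3.0
```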
16

Huang, Yu Hong, Hua Jing Zheng, Quan Jiang, and Zheng Ruan. "A Study on Preparation and Properties of a Full Color Organic Light-Emitting Diodes Display Device." Advanced Materials Research 396-398 (November 2011): 450–57. http://dx.doi.org/10.4028/www.scientific.net/amr.396-398.450.

Abstract:
A full-color 2.2″ passive-matrix organic light-emitting diode (OLED) display with 128 (RGB) × 160 pixels was developed. Its driving circuit can transfer 18-bit gray-scale data from a PC to the OLED panel via a DVI channel. The size of a pixel was 240 μm × 240 μm, while that of a mono sub-pixel was 190 μm × 45 μm. The lifetime of the panel was estimated at over 5000 h thanks to the use of dual-scan driving technology, and the power consumption of the display was about 300 mW when the average luminance of the panel reached 40 cd/m².
17

Cheng, Shi Jun, Hua Jing Zheng, Quan Jiang, and Gang Yang. "Preparation and Performance Optimization of a Full Color Organic Light-Emitting Diodes Display Device." Advanced Materials Research 476-478 (February 2012): 1258–63. http://dx.doi.org/10.4028/www.scientific.net/amr.476-478.1258.

Abstract:
A full-color 2.2″ passive-matrix organic light-emitting diode (OLED) display with 128 (RGB) × 160 pixels was developed. Its driving circuit can transfer 18-bit gray-scale data from a PC to the OLED panel via a DVI channel. The size of a pixel was 240 μm × 240 μm, while that of a mono sub-pixel was 190 μm × 45 μm. The lifetime of the panel was estimated at over 5000 h thanks to the use of dual-scan driving technology, and the power consumption of the display was about 300 mW when the average luminance of the panel reached 40 cd/m².
18

Hasan, Umut, Mamat Sawut, and Shuisen Chen. "Estimating the Leaf Area Index of Winter Wheat Based on Unmanned Aerial Vehicle RGB-Image Parameters." Sustainability 11, no. 23 (2019): 6829. http://dx.doi.org/10.3390/su11236829.

Abstract:
The leaf area index (LAI) is not only an important parameter for monitoring crop growth, but also an important input parameter for crop yield prediction models and hydrological and climatic models. Several studies have recently been conducted to estimate crop LAI using unmanned aerial vehicle (UAV) multispectral and hyperspectral data. However, there are few studies on estimating the LAI of winter wheat using unmanned aerial vehicle (UAV) RGB images. In this study, we estimated the LAI of winter wheat at the jointing stage on simple farmland in Xinjiang, China, using parameters derived from UAV RGB images. According to gray correlation analysis, UAV RGB-image parameters such as the Visible Atmospherically Resistant Index (VARI), the Red Green Blue Vegetation Index (RGBVI), the Digital Number (DN) of Blue Channel (B) and the Green Leaf Algorithm (GLA) were selected to develop models for estimating the LAI of winter wheat. The results showed that it is feasible to use UAV RGB images for inverting and mapping the LAI of winter wheat at the jointing stage on the field scale, and the partial least squares regression (PLSR) model based on the VARI, RGBVI, B and GLA had the best prediction accuracy (R2 = 0.776, root mean square error (RMSE) = 0.468, residual prediction deviation (RPD) = 1.838) among all the regression models. To conclude, UAV RGB images not only have great potential in estimating the LAI of winter wheat, but also can provide more reliable and accurate data for precision agriculture management.
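The RGB-image parameters used in this study (VARI, RGBVI, GLA) are standard visible-band vegetation indices; their commonly cited definitions are sketched below on per-pixel red, green, and blue values. The exact formulations and preprocessing used in the paper may differ.

```python
import numpy as np

def rgb_vegetation_indices(rgb):
    """Common visible-band vegetation indices computed per pixel from an RGB image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    eps = 1e-9                                       # guard against division by zero
    vari = (g - r) / (g + r - b + eps)               # Visible Atmospherically Resistant Index
    rgbvi = (g**2 - b * r) / (g**2 + b * r + eps)    # Red Green Blue Vegetation Index
    gla = (2*g - r - b) / (2*g + r + b + eps)        # Green Leaf Algorithm
    return vari, rgbvi, gla
```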
19

Vashpanov, Yuriy, Jung-Young Son, Gwanghee Heo, Tatyana Podousova, and Yong Suk Kim. "Determination of Geometric Parameters of Cracks in Concrete by Image Processing." Advances in Civil Engineering 2019 (October 30, 2019): 1–14. http://dx.doi.org/10.1155/2019/2398124.

Abstract:
The 8-bit RGB image of a cracked concrete surface, obtained with a high-resolution camera based on a close-distance photographing and using an optical microscope, is used to estimate the geometrical parameters of the crack. The parameters such as the crack’s width, depth, and morphology can be determined by the pixel intensity distribution of the image. For the estimation, the image is transformed into 16-bit gray scale to enhance the geometrical parameters of the crack and then a mathematical relationship relating the intensity distribution with the depth and width is derived based on the enhanced image. This relationship enables to estimate the width and depth with ±10% and ±15% accuracy, respectively, for the crack samples used for the experiments. It is expected that the accuracy can be further improved if the 8-bit RGB image is synthesized by the images of the cracks obtained with different illumination directions.
20

Du, Gaoming, Jiting Wu, Hongfang Cao, et al. "A Real-Time Effective Fusion-Based Image Defogging Architecture on FPGA." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3 (2021): 1–21. http://dx.doi.org/10.1145/3446241.

Abstract:
Foggy weather reduces the visibility of photographed objects, causing image distortion and decreasing overall image quality. Many approaches (e.g., image restoration, image enhancement, and fusion-based methods) have been proposed to work out the problem. However, most of these defogging algorithms are facing challenges such as algorithm complexity or real-time processing requirements. To simplify the defogging process, we propose a fusional defogging algorithm on the linear transmission of gray single-channel. This method combines gray single-channel linear transform with high-boost filtering according to different proportions. To enhance the visibility of the defogging image more effectively, we convert the RGB channel into a gray-scale single channel without decreasing the defogging results. After gray-scale fusion, the data in the gray-scale domain should be linearly transmitted. With the increasing real-time requirements for clear images, we also propose an efficient real-time FPGA defogging architecture. The architecture optimizes the data path of the guided filtering to speed up the defogging speed and save area and resources. Because the pixel reading order of mean and square value calculations are identical, the shift register in the box filter after the average and the computation of the square values is separated from the box filter and put on the input terminal for sharing, saving the storage area. What’s more, using LUTs instead of the multiplier can decrease the time delays of the square value calculation module and increase efficiency. Experimental results show that the linear transmission can save 66.7% of the total time. The architecture we proposed can defog efficiently and accurately, meeting the real-time defogging requirements on 1920 × 1080 image size.
21

Kumar, Nalin, and M. Nachamai. "Noise Removal and Filtering Techniques used in Medical Images." Oriental journal of computer science and technology 10, no. 1 (2017): 103–13. http://dx.doi.org/10.13005/ojcst/10.01.14.

Abstract:
Noise removal techniques have become an essential practice in medical imaging applications for the study of anatomical structures and for the processing of MRI images. To address these issues, many de-noising algorithms have been developed, such as the Wiener filter, the Gaussian filter, and the median filter. This research work considers only these three filters, which have already been used successfully in medical imaging. The noise types that most commonly affect medical MRI images are salt-and-pepper, speckle, Gaussian, and Poisson noise. The medical images taken for comparison include MRI images in gray scale and RGB. The performance of the algorithms is examined for various noise types: salt-and-pepper, Poisson, speckle, blur, and Gaussian noise. The evaluation is based on image file size, histogram, and the clarity of the images. The experimental results suggest that the median filter performs best for removing salt-and-pepper and Poisson noise from gray-scale images, the Wiener filter performs best for removing speckle and Gaussian noise, and the Gaussian filter performs best for blur.
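The three classical filters compared in this paper are all available in SciPy; a minimal comparison sketch follows. The kernel sizes and the Gaussian sigma are illustrative choices, not the settings used by the authors.

```python
import numpy as np
from scipy import ndimage, signal

def denoise(gray, method="median"):
    """Apply one of the three classical de-noising filters discussed above."""
    gray = gray.astype(float)
    if method == "median":
        return ndimage.median_filter(gray, size=3)
    if method == "gaussian":
        return ndimage.gaussian_filter(gray, sigma=1.0)
    if method == "wiener":
        return signal.wiener(gray, mysize=3)
    raise ValueError(f"unknown method: {method}")

# Toy comparison on a flat image corrupted with salt-and-pepper noise.
rng = np.random.default_rng(2)
clean = np.full((64, 64), 128.0)
noisy = clean.copy()
noisy[rng.random(clean.shape) < 0.05] = 0
noisy[rng.random(clean.shape) < 0.05] = 255
for method in ("median", "gaussian", "wiener"):
    print(method, float(np.mean((denoise(noisy, method) - clean) ** 2)))
```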
22

Seddik, Hassene, Sondes Tebbini, and Ezzeddine Ben Braiek. "Smart Real Time Adaptive Gaussian Filter Supervised Neural Network for Efficient Gray Scale and RGB Image De-noising." Intelligent Automation & Soft Computing 20, no. 3 (2014): 347–64. http://dx.doi.org/10.1080/10798587.2014.888242.

23

Marwa, Jaleel Mohsin, Kadhim Saad Wasan, J. Hamza Bashar, and A. Jabbar Waheb. "Performance analysis of image transmission with various channel conditions/modulation techniques." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 3 (2020): 1158–68. https://doi.org/10.12928/TELKOMNIKA.v18i3.14172.

Abstract:
This paper investigates the impact of different modulation techniques for digital communication systems that employ quadrature phase shift keying (QPSK) and quadrature amplitude modulation (16-QAM and 64-QAM) to transmit images over AWGN and Rayleigh fading channels in cellular mobile networks. Wiener and median filters are then used at the receiver side in the simulation to remove the impulsive noise present in the received image. The work evaluates the transmission of two-dimensional (2D) gray-scale and color (RGB) images at different signal-to-noise ratios (SNR), namely 5, 10, and 15 dB, over the different channels. Conclusions are drawn by comparing the MATLAB simulation results, which measure the quality of the received image in terms of the image SNR, peak signal-to-noise ratio (PSNR), and mean square error (MSE).
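The received-image quality metrics used in this study, MSE and PSNR, have standard definitions; a short sketch follows. The 8-bit peak value of 255 is an assumption about the image format.

```python
import numpy as np

def mse(reference, received):
    """Mean square error between two images of equal shape."""
    return float(np.mean((reference.astype(float) - received.astype(float)) ** 2))

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images (peak = 255)."""
    error = mse(reference, received)
    return float("inf") if error == 0 else 10.0 * np.log10(peak ** 2 / error)
```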
24

ZHOU, YU, YINFEI YANG, MENG YI, XIANG BAI, WENYU LIU, and LONGIN JAN LATECKI. "ONLINE MULTIPLE TARGETS DETECTION AND TRACKING FROM MOBILE ROBOT IN CLUTTERED INDOOR ENVIRONMENTS WITH DEPTH CAMERA." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 01 (2014): 1455001. http://dx.doi.org/10.1142/s0218001414550015.

Abstract:
Indoor environment is a common scene in our everyday life, and detecting and tracking multiple targets in this environment is a key component for many applications. However, this task still remains challenging due to limited space, intrinsic target appearance variation, e.g. full or partial occlusion, large pose deformation, and scale change. In the proposed approach, we give a novel framework for detection and tracking in indoor environments, and extend it to robot navigation. One of the key components of our approach is a virtual top view created from an RGB-D camera, which is named ground plane projection (GPP). The key advantage of using GPP is the fact that the intrinsic target appearance variation and extrinsic noise is far less likely to appear in GPP than in a regular side-view image. Moreover, it is a very simple task to determine free space in GPP without any appearance learning even from a moving camera. Hence GPP is very different from the top-view image obtained from a ceiling mounted camera. We perform both object detection and tracking in GPP. Two kinds of GPP images are utilized: gray GPP, which represents the maximal height of 3D points projecting to each pixel, and binary GPP, which is obtained by thresholding the gray GPP. For detection, a simple connected component labeling is used to detect footprints of targets in binary GPP. For tracking, a novel Pixel Level Association (PLA) strategy is proposed to link the same target in consecutive frames in gray GPP. It utilizes optical flow in gray GPP, which to our best knowledge has never been done before. Then we "back project" the detected and tracked objects in GPP to original, side-view (RGB) images. Hence we are able to detect and track objects in the side-view (RGB) images. Our system is able to robustly detect and track multiple moving targets in real time. The detection process does not rely on any target model, which means we do not need any training process. Moreover, tracking does not require any manual initialization, since all entering objects are robustly detected. We also extend the novel framework to robot navigation by tracking. As our experimental results demonstrate, our approach can achieve near prefect detection and tracking results. The performance gain in comparison to state-of-the-art trackers is most significant in the presence of occlusion and background clutter.
25

Ahmed, Muhammad Waqas, Touseef Sadiq, Hameedur Rahman, et al. "MAPE-ViT: multimodal scene understanding with novel wavelet-augmented Vision Transformer." PeerJ Computer Science 11 (May 23, 2025): e2796. https://doi.org/10.7717/peerj-cs.2796.

Abstract:
This article introduces Multimodal Adaptive Patch Embedding with Vision Transformer (MAPE-ViT), a novel approach for RGB-D scene classification that effectively addresses fundamental challenges of sensor misalignment, depth noise, and object boundary preservation. Our framework integrates maximally stable extremal regions (MSER) with wavelet coefficients to create comprehensive patch embedding that capture both local and global image features. These MSER-guided patches, incorporating original pixels and multi-scale wavelet information, serve as input to a Vision Transformer, which leverages its attention mechanisms to extract high-level semantic features. The feature discrimination capability is further enhanced through optimization using the Gray Wolf algorithm. The processed features then flow into a dual-stream architecture, where an extreme learning machine handles multi-object classification, while conditional random fields (CRF) manage scene-level categorization. Extensive experimental results demonstrate the effectiveness of our approach, showing significant improvements in classification accuracy compared to existing methods. Our system provides a robust solution for RGB-D scene understanding, particularly in challenging conditions where traditional approaches struggle with sensor artifacts and noise.
26

Somasekar, J., Y. C. A. Padmanabha Reddy, and G. Ramesh. "Border Detection of Malaria Infected Cells in Microscopic Images for Diagnosis: A Computer Vision Approach." Journal of Computational and Theoretical Nanoscience 17, no. 9 (2020): 4643–47. http://dx.doi.org/10.1166/jctn.2020.9292.

Abstract:
A computer vision approach is presented for border detection of malaria-infected cells in microscopic blood images for accurate diagnosis. First, the microscopic 24-bit RGB color blood image is converted into an 8-bit gray-scale image for single-channel processing. The proposed two-stage thresholding method is then used for segmentation of malaria-infected cells. Regarding border irregularities, the chosen descriptors are the perimeter factor and the 4-connected neighbourhood. Experimental results on a benchmark dataset comprising around 300 images show that the proposed method successfully detects the borders of malaria-infected cells with no prior knowledge of the image contents and without parameter tuning. The proposed method is compared with other existing methods, and the results are discussed.
27

Al Sasongko, Sudi Mariyanto, Erni Dwi Jayanti, and Suthami Ariessaputra. "Application of Gray Scale Matrix Technique for Identification of Lombok Songket Patterns Based on Backpropagation Learning." JOIV : International Journal on Informatics Visualization 6, no. 4 (2022): 835. http://dx.doi.org/10.30630/joiv.6.4.1532.

Abstract:
Songket is a woven fabric created by prying the threads and adding more weft to create an embossed decorative pattern on a cotton or silk thread woven background. While songket from many places share similar motifs, when examined closely, the motifs of songket from various regions differ, one of which is in the Province of West Nusa Tenggara, namely Lombok Island. To assist the public in recognizing the many varieties of Lombok songket motifs, the researchers used digital image processing technology, including pattern recognition, to distinguish the distinctive patterns of Lombok songket. The Gray Level Co-occurrence Matrix (GLCM) technique and Backpropagation Neural Networks are used to build a pattern identification system to analyze the Lombok songket theme. Before beginning the feature extraction process, the RGB color image has converted to grayscale (grayscale), which is resized. Simultaneously, a Backpropagation Neural Network is employed to classify Lombok songket theme variations. This study used songket motif photos consisting of a sample of 15 songket motifs with the same color theme that was captured eight times, four of which were used as training data and kept in the database. Four additional photos were utilized as test data or data from sources other than the database. When the system’s ability to recognize the pattern of Lombok songket motifs is tested, the maximum average recognition percentage at a 0° angle is 88.33 percent. In comparison, the lowest average recognition percentage at a 90° angle is 68.33 percent.
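The GLCM texture features used here can be prototyped with scikit-image; the sketch below (assuming scikit-image 0.19 or newer, where the functions are named graycomatrix and graycoprops) extracts four common GLCM statistics at one pixel distance and four orientations. It illustrates the general technique rather than the authors' exact feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    """GLCM texture statistics at distance 1 and orientations 0, 45, 90, and 135 degrees."""
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}

rng = np.random.default_rng(3)
features = glcm_features(rng.integers(0, 256, size=(64, 64), dtype=np.uint8))
```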
28

NIDHI, GUPTA, and SHUKLA DOLLEY. "A RECONSTRUCTION OF GRAY SCALE IMAGE INTO RGB COLOR SPACE IMAGE USING YCBCR COLOR SPACING AND LUMINANCE MAPPING IN MATLAB." i-manager’s Journal on Pattern Recognition 4, no. 3 (2017): 17. http://dx.doi.org/10.26634/jpr.4.3.13886.

29

Gillespy, Thurman, and Alan H. Rowberg. "Dual lookup table algorithm: An enhanced method of displaying 16-bit gray-scale images on 8-bit RGB graphic systems." Journal of Digital Imaging 7, no. 1 (1994): 13–17. http://dx.doi.org/10.1007/bf03168474.

30

Li, Dahua, Hui Zhao, Xiangfei Zhao, Qiang Gao, and Liang Xu. "Cucumber Detection Based on Texture and Color in Greenhouse." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 08 (2017): 1754016. http://dx.doi.org/10.1142/s0218001417540167.

Abstract:
Agricultural robots for mechanical harvesting require automatic detection and counting of fruits in the canopy. Because of color similarity, shape irregularity, and background complexity, fruit identification is a very difficult task, let alone executing the picking action. Green cucumber detection against a complex background is therefore challenging for all of the above reasons. In this paper, a technique based on texture analysis and color analysis is proposed for detecting cucumbers in a greenhouse. The RGB image is converted to a gray-scale image and an HSI image for the respective analyses. Color analysis is carried out in the first stage to remove background such as soil, branches, and sky, while keeping as many green fruit pixels (cucumbers and leaves) as possible. In parallel, MSER and HOG are applied for texture analysis of the gray-scale image; MSER yields candidate regions that may contain cucumbers. A support vector machine is the classifier used for the identification task. To further remove false positives, key points are detected by a SIFT algorithm. The results of color analysis and texture analysis are then merged to obtain candidate cucumber regions. In the last stage, mathematical morphology operations are applied to obtain the complete cucumber.
31

Bappy, Md Imran Hasan. "INSTANTANEOUS LOAD-DEFLECTION BEHAVIOUR OF REINFORCED CONCRETE BEAMS." Suranaree Journal of Science and Technology 30, no. 6 (2024): 010261(1–10). http://dx.doi.org/10.55766/sujst-2023-06-e0134.

Abstract:
Beam deflection under load and concrete cracking are ubiquitous phenomena, present in all types of concrete structures. This research work presents a comparative analysis of the experimental and theoretical deflections of simply supported beams with different proportions of mild steel reinforcement and different curing conditions, i.e., painting, air, and water curing. Theoretical deflections were calculated using AS 3600-2018, ACI 318-14, and Eurocode 2:2004; to differentiate among these codes, shrinkage was varied through the different curing conditions. Crack width was measured by image processing using the ImageJ software and compared with theoretical results from Eurocode 2:2004 and ACI 224.1R-07. Processing of RGB images with sufficient information provides the crack parameters: the RGB images were converted to gray scale, followed by filtering and thresholding, and the crack parameters were determined automatically by the Ridge Detection plugin. The deflections of simply supported beams obtained from the experiments and from the calculations using AS 3600-2018, Eurocode 2:2004, and ACI 318-14 are very close. Image processing is a very effective technique for studying cracks in concrete, and the "Ridge Detection" plugin is very helpful for automatic detection.
32

Pradeep M and Dr. M Siddappa. "CLASSIFICATION OF RICE USING CONVOLUTIONAL NEURAL NETWORK (CNN)." international journal of engineering technology and management sciences 7, no. 5 (2023): 455–63. http://dx.doi.org/10.46647/ijetms.2023.v07i05.056.

Abstract:
This paper describes a technique for automatic recognition and classification of different rice grain samples using a neural network classifier. The Red Green Blue (RGB), Hue Saturation Intensity (HSI), and Hue Saturation Value (HSV) color models of the image were considered for extracting 18 color features. The classification was carried out using color and texture features separately. The color image was converted to a gray-scale image, and the Gray Level Co-occurrence Matrices (GLCM) for four different directions were calculated; a total of eight texture features were computed from the co-occurrence matrices. A Convolutional Neural Network (CNN) is used for the classification process, and the classification accuracies obtained with color features and with texture features were compared. The results show that the classification based on texture features outperforms the color-feature-based classification even with a smaller number of features. The Convolutional Neural Network was able to classify two varieties of rice with 100% accuracy using the texture features together with Sobel and Canny edge detection of the fiber features in the grain.
33

Sutikno, Sutikno, Helmie Arif Wibawa, and Ragil Saputra. "Automatic Detection of Motorcycle on the Road using Digital Image Processing." Scientific Journal of Informatics 6, no. 2 (2018): 203–12. http://dx.doi.org/10.15294/sji.v6i2.20143.

Abstract:
Traffic accidents are among the leading causes of death in the world, and accidents involving motorcyclists not wearing helmets are one of them. To address this problem, several researchers have developed systems for detecting motorcyclists who do not wear helmets; such a system consists of motorcycle detection and motorcyclist head detection. The accuracy of motorcycle detection still needs to be improved. For this reason, this paper proposes motorcycle detection with added image improvement processes, namely contrast enhancement and object-position features. The proposed technique is divided into three stages: image enhancement, feature extraction, and classification. The image enhancement stage consists of contrast enhancement, RGB-to-gray-scale conversion, background subtraction, gray-scale-to-binary conversion, a closing operation, and small-object removal. The features used are the object area, the object perimeter, and the object location, while classification is performed with a back-propagation neural network and an SVM. The proposed method achieved an accuracy of 96.97%. The errors occur in test images where non-motorcycle objects are detected as motorcycles; they are caused by the pixel values of the objects differing too little from the background color, so that a non-motorcycle object is detected.
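A compact NumPy/SciPy sketch of the image-enhancement stage listed in this abstract (gray conversion, background subtraction, binarization, closing, and small-object removal; contrast enhancement is omitted) is given below. The threshold, structuring element, and minimum object size are placeholder values.

```python
import numpy as np
from scipy import ndimage

def segment_foreground(frame_rgb, background_gray, thresh=30, min_size=200):
    """Background-subtraction segmentation roughly following the stages above."""
    gray = frame_rgb.astype(float) @ np.array([0.299, 0.587, 0.114])  # RGB to gray scale
    diff = np.abs(gray - background_gray.astype(float))               # background subtraction
    binary = diff > thresh                                            # gray scale to binary
    closed = ndimage.binary_closing(binary, structure=np.ones((5, 5)))
    labels, n = ndimage.label(closed)                                 # connected components
    sizes = ndimage.sum(closed, labels, index=np.arange(1, n + 1))    # area of each object
    keep_ids = np.flatnonzero(sizes >= min_size) + 1                  # drop small objects
    return np.isin(labels, keep_ids)
```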
34

Sharma, Himani, D. C. Mishra, R. K. Sharma, and Naveen Kumar. "Crypto-stego System for Securing Text and Image Data." International Journal of Image and Graphics 18, no. 04 (2018): 1850020. http://dx.doi.org/10.1142/s0219467818500201.

Abstract:
Conventional techniques for security of data, designed by using only one of the security mechanisms, cryptography or steganography, are suitable for limited applications only. In this paper, we propose a crypto-stego system that would be appropriate for secure transmission of different forms of data. In the proposed crypto-stego system, we present a mechanism to provide secure transmission of data by multiple safety measures, firstly by applying encryption using Affine Transform and Discrete Cosine Transform (DCT) and then merging this encrypted data with an image, randomly chosen from a set of available images, and sending the image so obtained to the receiver at the other end through the network. The data to be sent over a communication channel may be a gray-scale or colored image, or a text document (doc, .txt, or .pdf file). As it is encrypted and sent hidden in an image, it avoids any attention to itself by the observers in the network. At the receiver’s side, reverse transformations are applied to obtain the original information. The experimental results, security analysis and statistical analysis for gray-scale images, RGB images, text documents (.doc, .txt, .pdf files), show robustness and appropriateness of the proposed crypto-stego system for secure transmission of the data through unsecured network. The security analysis and key space analysis demonstrate that the proposed technique is immune from cryptanalysis.
35

Lu, Ziwei, Tongwei Zhu, Huiyu Zhou, Lanyong Zhang, and Chun Jia. "An Image Enhancement Method for Side-Scan Sonar Images Based on Multi-Stage Repairing Image Fusion." Electronics 12, no. 17 (2023): 3553. http://dx.doi.org/10.3390/electronics12173553.

Abstract:
The noise interference of side-scan sonar images is stronger than that of optical images, and the gray level is uneven. To solve this problem, we propose a side-scan sonar image enhancement method based on multi-stage repairing image fusion. Firstly, to remove the environmental noise in the sonar image, we perform adaptive Gaussian smoothing on the original image and the weighted average grayscale image. Then, the smoothed images are all processed through multi-stage image repair. The multi-stage repair network consists of three stages. The first two stages consist of a novel encoder–decoder architecture to extract multi-scale contextual features, and the third stage uses a network based on the resolution of the original inputs to generate spatially accurate outputs. Each phase is not a simple stack. Between each phase, the supervised attention module (SAM) improves the repair results of the previous phase and passes them to the next phase. At the same time, the multi-scale cross-stage feature fusion mechanism (MCFF) is used to complete the information lost in the repair process. Finally, to correct the gray level, we propose a pixel-weighted fusion method based on the unsupervised color correction method (UCM), which performs weighted pixel fusion between the RGB image processed by the UCM algorithm and the gray-level image. Compared with the algorithm with the SOTA methods on datasets, our method shows that the peak signal-to-noise ratio (PSNR) is increased by 26.58%, the structural similarity (SSIM) is increased by 0.68%, and the mean square error (MSE) is decreased by 65.02% on average. In addition, the processed image is balanced in terms of image chromaticity, image contrast, and saturation, and the grayscale is balanced to match human visual perception.
36

Murk, Hassan Memon, Jamil Saifullah Khanzada Tariq, Memon Sheeraz, and Raheel Hassan Syed. "Blood image analysis to detect malaria using filtering image edges and classification." TELKOMNIKA Telecommunication, Computing, Electronics and Control 17, no. 1 (2019): 194–201. https://doi.org/10.12928/TELKOMNIKA.v17i1.11586.

Abstract:
Malaria is a highly dangerous mosquito-borne disease whose infection is spread by infected mosquitoes; it especially affects pregnant women and children under five years of age. Malarial species commonly occur in five different shapes, and researchers have therefore proposed image-analysis-based solutions to mitigate this deadly disease. In this work, we propose a malaria diagnosis algorithm that is implemented for testing and evaluation in MATLAB. We use filtering and classification, with a median filter and an SVM classifier, and the proposed method identifies infected cells within blood images. Median-filter smoothing is used to remove noise. Feature vectors are proposed to find abnormalities in blood cells; they include form factor, roundness measurement, shape, and the total count of red cells and parasites. The primary aim of this research is to diagnose malaria by identifying infected cells. Many image-processing techniques and algorithms have been applied in this field, but their accuracy is not yet sufficient. Our proposed algorithm achieves more efficient results and higher accuracy than the NCC and fuzzy classifiers recently used by other researchers.
37

Hiremath, Prakash S., and Rohini A. Bhusnurmath. "Performance Analysis of Anisotropic Diffusion Based Colour Texture Descriptors in Industrial Applications." International Journal of Computer Vision and Image Processing 7, no. 2 (2017): 50–63. http://dx.doi.org/10.4018/ijcvip.2017040104.

Abstract:
A novel method of colour texture analysis based on anisotropic diffusion for industrial applications is proposed and the performance analysis of colour texture descriptors is examined. The objective of the study is to explore different colour spaces for their suitability in automatic classification of certain textures in industrial applications, namely, granite tiles and wood textures, using computer vision. The directional subbands of digital image of material samples obtained using wavelet transform are subjected to anisotropic diffusion to obtain the texture components. Further, statistical features are extracted from the texture components. The linear discriminant analysis is employed to achieve class separability. The texture descriptors are evaluated on RGB, HSV, YCbCr, Lab colour spaces and compared with gray scale texture descriptors. The k-NN classifier is used for texture classification. For the experimentation, benchmark databases, namely, MondialMarmi and Parquet are considered. The experimental results are encouraging as compared to the state-of-the-art-methods.
38

Lee, Gwanghyeong, Hyunjung Myung, Donghoon Kim, Sewoon Cho, Sunghwan Jeong, and Byoungjun Kim. "Searching Spectrum Band of Crop Area Based on Deep Learning Using Hyper-spectral Image." Korean Institute of Smart Media 13, no. 8 (2024): 39–48. http://dx.doi.org/10.30693/smj.2024.13.8.39.

Abstract:
Recently, various studies have emerged that utilize hyperspectral imaging for crop growth analysis and early disease diagnosis. However, the challenge of using numerous spectral bands or finding the optimal bands for crop area remains a difficult problem. In this paper, we propose a method of searching the optimized spectral band of crop area based on deep learning using the hyper-spectral image. The proposed method extracts RGB images within hyperspectral images to segment background and foreground area through a Vision Transformer-based Seformer. The segmented results project onto each band of gray-scale converted hyperspectral images. It determines the optimized spectral band of the crop area through the pixel comparison of the projected foreground and background area. The proposed method achieved foreground and background segmentation performance with an average accuracy of 98.47% and a mIoU of 96.48%. In addition, it was confirmed that the proposed method converges to the NIR regions closely related to the crop area compared to the mRMR method.
39

Nuzzolese, Emilio, Matteo Aliberti, and Giancarlo Di Vella. "Colorimetric Study on Burnt Teeth and New Diagnostic Tool in Forensic Dental Identification: The Carbodent Scale." Oral 4, no. 3 (2024): 303–14. http://dx.doi.org/10.3390/oral4030025.

Abstract:
Background: Teeth are the anatomical tissue with the highest resistance to the action of chemical and physical agents. This is one of the reasons that make teeth particularly useful in the identification process of skeletonized and carbonized human remains. The aim of this research is to analyze the colorimetric changes in the enamel of teeth subjected to high temperatures to develop a reproducible colorimetric cataloging method. Methods: Six groups of 21 human teeth extracted from private clinics and from a Dental School for therapeutic reasons were used and subjected to three temperature ranges in a laboratory furnace: 400 °C, 700 °C, and 1000 °C. For each temperature, two time periods of 20 min and 60 min were chosen. Each group of dental elements was analyzed using a dental spectrophotometer to extract the colorimetric data of the crown. The obtained color coordinates were subsequently converted into Red–Green–Blue (RGB) values. The two predominant colors were also selected to create average colorimetric values, which demonstrate the change in color hue according to temperature. The groups of teeth subjected to 20 min at 400 °C exhibited a dark gray coloration, while the teeth subjected to 20 min at 700 °C showed a general increase in color brightness with beige–blueish hues. Results: The teeth subjected to 20 min at 1000 °C displayed progressively lighter shades with pinkish reflections. The teeth subjected to 60 min at the same temperatures demonstrated a general increase in brightness, making differentiation more challenging, except for the group of teeth burned at 400 °C, which showed light gray–blueish tones. Conclusion: This study further supports the existing literature on the correlation between colorimetric shifts in carbonized teeth and the maximum temperature reached, providing valuable assistance to forensic pathology and the forensic dental identification of burnt human remains. Additionally, this research has led to the development of a standardized colorimetric patented scale for the observation and examination of burnt human teeth.
APA, Harvard, Vancouver, ISO, and other styles
40

Reder, Leonard J., and Michael Farris. "A Tour up the Gray Scale Vector of the RGB Color Cube: How Computer Graphics Color Spaces Relate to Digital Video Color Difference Space." SMPTE Journal 111, no. 6-7 (2002): 330–42. http://dx.doi.org/10.5594/j15331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Abir, Chakraborty. "Comparison of Signal to Noise Ratio of Colored and Gray Scale Image in Clustered Condition from the Contours of the Images with the Help of Different Image Filtering Method." Indian Journal of Image Processing and Recognition (IJIPR) 4, no. 3 (2024): 10–14. https://doi.org/10.54105/ijipr.D1029.04030424.

Full text
Abstract:
Images can be processed with the help of different types of coding, for example MATLAB. In this paper we primarily focus on some common filtering methodologies [5] related to image contours in clustered conditions. For filtering we use three different techniques: Prewitt [3], Sobel [3], and Canny [3] filtering, applied to both colored [1] and non-colored (gray-scale) [3] images for the clustering operations. Our main aim in this paper is to show the variation of the signal-to-noise ratio for colored and non-colored contour images with and without filtering. The discussion of the results should be studied carefully to appreciate the findings of the paper [4].
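A minimal sketch of the gray-scale branch of such a comparison is given below, using scikit-image's Prewitt, Sobel, and Canny filters and a naive mean-over-standard-deviation ratio as the SNR proxy; the sample image, the sigma value, and the SNR definition are assumptions, and a colored-image variant would apply the same filters per channel.

```python
import numpy as np
from skimage import data, filters, feature
from skimage.color import rgb2gray

def simple_snr(img: np.ndarray) -> float:
    """Naive signal-to-noise proxy: mean over standard deviation."""
    img = img.astype(float)
    return img.mean() / (img.std() + 1e-8)

rgb = data.astronaut()          # sample RGB image
gray = rgb2gray(rgb)            # gray-scale version

edges = {
    "prewitt": filters.prewitt(gray),
    "sobel": filters.sobel(gray),
    "canny": feature.canny(gray, sigma=2.0).astype(float),
}
for name, edge_map in edges.items():
    print(name, "gray-contour SNR:", round(simple_snr(edge_map), 3))
```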
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Guanshi, Shengkui Tian, Yankun Mo, Ruyi Chen, and Qingsong Zhao. "On the Acquisition of High-Quality Digital Images and Extraction of Effective Color Information for Soil Water Content Testing." Sensors 22, no. 9 (2022): 3130. http://dx.doi.org/10.3390/s22093130.

Full text
Abstract:
Soil water content (SWC) is a critical indicator for engineering construction, crop production, and the hydrologic cycle. The rapid and accurate assessment of SWC is of great importance. At present, digital images are becoming increasingly popular in environmental monitoring and soil property analysis owing to the advantages of non-destructiveness, cheapness, and high-efficiency. However, the capture of high-quality digital image and effective color information acquisition is challenging. For this reason, a photographic platform with an integrated experimental structure configuration was designed to yield high-quality soil images. The detrimental parameters of the platform including type and intensity of the light source and the camera shooting angle were determined after systematic exploration. A new method based on Gaussian fitting gray histogram for extracting RGB image feature parameters was proposed and validated. The correlation between 21 characteristic parameters of five color spaces (RGB, HLS, CIEXYZ, CIELAB, and CIELUV) and SWC was investigated. The model for the relationship between characteristic parameters and SWC was constructed by using least squares regression (LSR), stepwise regression (STR), and partial least squares regression (PLSR). Findings showed that the camera platform equipped with 45° illumination D65 light source, 90° shooting angle, 1900~2500 lx surface illumination, and operating at ambient temperature difference of 5 °C could produce highly reproducible and stable soil color information. The effects of image scale had a great influence on color feature extraction. The entire area of soil image, i.e., 3,000,000 pixels, was chosen in conjunction with a new method for obtaining color features, which is beneficial to eliminate the interference of uneven lightness and micro-topography of soil samples. For the five color spaces and related 21 characteristic parameters, RGB and CIEXYZ spaces and characteristic parameter of lightness both exhibited the strongest correlation with SWC. The PLSR model based on soil specimen images ID had an excellent predictive accuracy and the best stability (R2 = 0.999, RMSE = 0.236). This study showed the potential of the application of color information of digital images to predict SWC in agriculture and geotechnical engineering.
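The "Gaussian fitting of the gray histogram" idea described above can be illustrated with the short sketch below, which fits a Gaussian to one channel's gray-level histogram and uses the fitted peak position as the color feature; the bin count, initial guess, and synthetic soil-like image are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def channel_feature(channel: np.ndarray) -> float:
    """Fit a Gaussian to a channel's gray-level histogram and return its
    peak position (mu) as the representative color feature."""
    hist, edges = np.histogram(channel.ravel(), bins=256, range=(0, 255))
    centers = (edges[:-1] + edges[1:]) / 2.0
    p0 = [hist.max(), centers[np.argmax(hist)], 20.0]   # initial guess
    (amp, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0)
    return mu

# Hypothetical usage on a synthetic soil-like image (values 0-255 per channel):
rng = np.random.default_rng(0)
img = rng.normal(loc=[120, 90, 60], scale=12, size=(200, 200, 3)).clip(0, 255)
features = [channel_feature(img[:, :, c]) for c in range(3)]
print("Fitted R, G, B peaks:", [round(f, 1) for f in features])
```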
APA, Harvard, Vancouver, ISO, and other styles
43

BANGASH, RUBAB FATIMA, Imran Tauqir, and Azka Maqsood. "Two-Dimensional Wavelet based Medical Videos using Hidden Markov Tree Model." KIET Journal of Computing and Information Sciences 4, no. 1 (2021): 13. http://dx.doi.org/10.51153/kjcis.v4i1.60.

Full text
Abstract:
Wavelet-based statistical image denoising is a vital preprocessing technique in real-world imaging. Most medical videos inherit system-introduced noise during acquisition as a result of the additional image capturing techniques, resulting in poor video quality for examination. A trade-off is therefore needed between preserving the actual image content and reducing noise, so that all necessary information is retained. Existing techniques operate in the time-frequency domain and require the wavelet coefficients to be independent or jointly Gaussian. In the denoising arena there is a need to exploit the temporal dependencies of wavelet coefficients with non-Gaussian nature. Here we present a YCbCr (luminance-chrominance) based denoising strategy using a Hidden Markov Model (HMM) built on multiresolution analysis within the framework of the Expectation-Maximization algorithm. The proposed algorithm applies the denoising technique independently to each frame of the video. It models the non-Gaussian statistics of each wavelet coefficient and captures the statistical dependencies between coefficients. Denoised frames are restored by inverse processing of the wavelet coefficients. Significant results are visualized through both objective and subjective analysis. Parameters such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are used for quality assessment of the proposed method in comparison with gray-scale and Red, Green, Blue (RGB) scale video coefficients.
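A much-simplified per-frame sketch of the luminance-channel idea follows, using plain wavelet soft-thresholding with PyWavelets instead of the paper's HMM/EM model; the wavelet choice, decomposition level, and threshold rule are assumptions made only to show where the Y channel is denoised.

```python
import numpy as np
import cv2
import pywt

def denoise_frame_y(frame_bgr: np.ndarray) -> np.ndarray:
    """Denoise only the luminance (Y) channel of one video frame."""
    # Note: OpenCV orders the channels as Y, Cr, Cb.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(float)
    y = ycrcb[:, :, 0]

    # 2-level 2D wavelet decomposition and soft-thresholding of detail bands.
    coeffs = pywt.wavedec2(y, "db4", level=2)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(y.size))            # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in level)
        for level in coeffs[1:]
    ]
    ycrcb[:, :, 0] = pywt.waverec2(denoised, "db4")[: y.shape[0], : y.shape[1]]
    return cv2.cvtColor(ycrcb.clip(0, 255).astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```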
APA, Harvard, Vancouver, ISO, and other styles
44

Wawrzyk-Bochenek, Iga, Mansur Rahnama, Sławomir Wilczyński, and Anna Wawrzyk. "Quantitative Assessment of Hyperpigmentation Changes in Human Skin after Microneedle Mesotherapy Using the Gray-Level Co-Occurrence Matrix (GLCM) Method." Journal of Clinical Medicine 12, no. 16 (2023): 5249. http://dx.doi.org/10.3390/jcm12165249.

Full text
Abstract:
Aim: The aim of the study was to quantitatively assess the effectiveness of microneedle mesotherapy in reducing skin discoloration. The results were analyzed using the gray-level co-occurrence matrix (GLCM) method. Material and methods: The skin of the forearm (7 × 7 cm) of 12 women aged 29 to 68 was examined. Microneedle mesotherapy was performed using a dermapen with a preparation containing 12% ascorbic acid. Each of the volunteers underwent a series of four microneedle mesotherapy treatments. The effectiveness of the treatment was quantified using the methods of image analysis and processing. A series of clinical images were taken in cross-polarized light before and after a series of cosmetic procedures. Then, the treated areas were analyzed by determining the parameters of the gray-level co-occurrence matrix (GLCM) algorithm: contrast and homogeneity. Results: During image pre-processing, the volunteers’ clinical images were separated into red (R), green (G) and blue (B) channels. The photos taken after the procedure show an increase in skin brightness compared to the photos taken before the procedure. The average increase in skin brightness after the treatment was 10.6%, the average decrease in GLCM contrast was 10.7%, and the average homogeneity increased by 14.5%. Based on the analysis, the greatest differences in the GLCM contrast were observed during tests performed in the B channel of the RGB scale. With a decrease in GLCM contrast, an increase in postoperative homogeneity of 0.1 was noted, which is 14.5%.
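The GLCM contrast and homogeneity measures used in this study can be computed with scikit-image as sketched below; the neighbor distance, angle set, and the random before/after images are illustrative assumptions, with the single channel standing in for the B channel highlighted in the abstract.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast_homogeneity(channel: np.ndarray) -> tuple:
    """Compute GLCM contrast and homogeneity for one 8-bit image channel."""
    glcm = graycomatrix(
        channel,
        distances=[1],                       # neighbor offset of 1 pixel
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    return contrast, homogeneity

# Hypothetical before/after comparison on the blue (B) channel:
before = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
after = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
for label, img in (("before", before), ("after", after)):
    c, h = glcm_contrast_homogeneity(img)
    print(label, "contrast:", round(c, 2), "homogeneity:", round(h, 4))
```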
APA, Harvard, Vancouver, ISO, and other styles
45

Deshmukh, Kalyani, and S. D. Mali. "LANDING ASSISTANCE AND EVALUATION USING IMAGE PROCESSING." International Journal of Research -GRANTHAALAYAH 3, no. 6 (2021): 84–92. http://dx.doi.org/10.29121/granthaalayah.v3.i6.2015.3003.

Full text
Abstract:
Most landing systems are based on GPS and radar altimeters, sonar, or infrared. In urban environments, however, buildings and other obstacles disturb the GPS signal and can even cause loss of signal. In such cases it is beneficial to have an independent navigation and landing assistance system. The main aim is therefore to design a software system that will assist a helicopter or Unmanned Aerial Vehicle accurately under all weather conditions. The software system takes a height parameter and images from the helicopter or Unmanned Aerial Vehicle as input. After applying a number of processing techniques to the image, such as edge detection and RGB to gray-scale conversion, the image is compared with the HSV dataset to find free space. For edge detection the Canny edge detection algorithm is used. From the detected free spaces, the nearest patch is selected by taking the vehicle dimensions and the landing orientation of the vehicle into consideration. Performance of the system depends on its accuracy and speed. This system also resolves the potentially dangerous problem of GPS denial.
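A hedged sketch of the preprocessing chain described above (RGB to gray-scale conversion followed by Canny edge detection with OpenCV) is given below; the Canny thresholds, patch size, and the edge-density criterion for "free space" are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def find_flat_regions(frame_bgr: np.ndarray, patch: int = 64,
                      max_edge_ratio: float = 0.02) -> list:
    """Return top-left corners of patches with few edges (candidate landing spots)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # RGB/BGR to gray scale
    edges = cv2.Canny(gray, 100, 200)                    # Canny edge map
    candidates = []
    for y in range(0, gray.shape[0] - patch, patch):
        for x in range(0, gray.shape[1] - patch, patch):
            window = edges[y:y + patch, x:x + patch]
            if (window > 0).mean() < max_edge_ratio:     # mostly edge-free patch
                candidates.append((x, y))
    return candidates
```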
APA, Harvard, Vancouver, ISO, and other styles
46

Qashlim, Akhmad, Basri Basri, Haeruddin Haeruddin, Ardan Ardan, Inggrid Nurtanio, and Amil Ahmad Ilham. "Smartphone Technology Applications for Milkfish Image Segmentation Using OpenCV Library." International Journal of Interactive Mobile Technologies (iJIM) 14, no. 08 (2020): 150. http://dx.doi.org/10.3991/ijim.v14i08.12423.

Full text
Abstract:
This research presents the use of smartphone technology to assist fisheries work. Specifically, we designed an Android application that utilizes a camera connected to the internet to detect RGB image objects and then convert them to HSV and gray scale. In this paper, Android-based smartphone technology using image processing methods is discussed: a digital tool that provides fish detection results in the form of length, width, and weight, which are used to determine the price of fish. The application was created using features provided by the OpenCV library to produce binary images. Three main components were highlighted during application design: C++ with Qt was used to build the user interface, the active contour method was used to divide and separate image objects from the background, and the Canny edge detection method was used to improve the outline appearance of objects. Both methods are implemented on the Android platform and utilize the smartphone camera as an identification tool. The application can provide great benefits for fish farmers, but on the other hand may create technological gaps.
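A rough Python/OpenCV sketch of this kind of measurement pipeline is shown below (the app itself is described as C++/Qt on Android); the Otsu thresholding, the largest-contour assumption, and the pixels-per-centimeter calibration are all illustrative assumptions.

```python
import cv2
import numpy as np

def measure_fish(image_bgr: np.ndarray, px_per_cm: float) -> tuple:
    """Estimate fish bounding-box length and width (cm) from the largest contour."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)     # available for color-based masks
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    fish = max(contours, key=cv2.contourArea)            # assume largest blob is the fish
    x, y, w, h = cv2.boundingRect(fish)
    return w / px_per_cm, h / px_per_cm                  # bounding-box width, height in cm
```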
APA, Harvard, Vancouver, ISO, and other styles
47

Rai, Ankush, and Jagadeesh Kannan R. "A SURVEY OF MULTISPECTRAL IMAGE DENOISING METHODS FOR SATELLITE IMAGERY APPLICATIONS." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 292. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19740.

Full text
Abstract:
In comparison with standard RGB or gray-scale images, multispectral images (MSI) are intended to convey a high-definition and authentic representation of real-world scenes, significantly enhancing the performance of several other tasks involving computer vision, image segmentation, object extraction, and object tagging. While procuring images from satellites, MSI are often prone to noise. Finding a good mathematical description of a learning-based denoising model is a difficult research question, and many different approaches have been reported in the literature. Many have attempted to apply neural networks as a sparse learned dictionary of noisy patches. Furthermore, this approach allows the algorithm to optimize itself for the given task using machine learning. In practice, however, an MSI is always prone to corruption by various sources of noise while the images are procured. In this survey, we study the techniques previously attempted for noise-affected MSI. The survey presents an outline of these techniques and their respective advantages in comparison with each other.
APA, Harvard, Vancouver, ISO, and other styles
48

Kalyani, Deshmukh, and D. Mali S. "LANDING ASSISTANCE AND EVALUATION USING IMAGE PROCESSING." International Journal of Research -GRANTHAALAYAH 3, no. 6 (2017): 84–92. https://doi.org/10.5281/zenodo.803442.

Full text
Abstract:
Most landing systems are based on GPS and radar altimeters, sonar, or infrared. In urban environments, however, buildings and other obstacles disturb the GPS signal and can even cause loss of signal. In such cases it is beneficial to have an independent navigation and landing assistance system. The main aim is therefore to design a software system that will assist a helicopter or Unmanned Aerial Vehicle accurately under all weather conditions. The software system takes a height parameter and images from the helicopter or Unmanned Aerial Vehicle as input. After applying a number of processing techniques to the image, such as edge detection and RGB to gray-scale conversion, the image is compared with the HSV dataset to find free space. For edge detection the Canny edge detection algorithm is used. From the detected free spaces, the nearest patch is selected by taking the vehicle dimensions and the landing orientation of the vehicle into consideration. Performance of the system depends on its accuracy and speed. This system also resolves the potentially dangerous problem of GPS denial.
APA, Harvard, Vancouver, ISO, and other styles
49

Fu, Yuanyuan, Guijun Yang, Zhenhai Li, et al. "Winter Wheat Nitrogen Status Estimation Using UAV-Based RGB Imagery and Gaussian Processes Regression." Remote Sensing 12, no. 22 (2020): 3778. http://dx.doi.org/10.3390/rs12223778.

Full text
Abstract:
Predicting the crop nitrogen (N) nutrition status is critical for optimizing nitrogen fertilizer application. The present study examined the ability of multiple image features derived from unmanned aerial vehicle (UAV) RGB images for winter wheat N status estimation across multiple critical growth stages. The image features consisted of RGB-based vegetation indices (VIs), color parameters, and textures, which represented image features of different aspects and different types. To determine which N status indicators could be well-estimated, we considered two mass-based N status indicators (i.e., the leaf N concentration (LNC) and plant N concentration (PNC)) and two area-based N status indicators (i.e., the leaf N density (LND) and plant N density (PND)). Sixteen RGB-based VIs associated with crop growth were selected. Five color space models, including RGB, HSV, L*a*b*, L*c*h*, and L*u*v*, were used to quantify the winter wheat canopy color. The combination of Gaussian processes regression (GPR) and Gabor-based textures with four orientations and five scales was proposed to estimate the winter wheat N status. The gray level co-occurrence matrix (GLCM)-based textures with four orientations were extracted for comparison. The heterogeneity in the textures of different orientations was evaluated using the measures of mean and coefficient of variation (CV). The variable importance in projection (VIP) derived from partial least square regression (PLSR) and a band analysis tool based on Gaussian processes regression (GPR-BAT) were used to identify the best performing image features for the N status estimation. The results indicated that (1) the combination of RGB-based VIs or color parameters only could produce reliable estimates of PND and the GPR model based on the combination of color parameters yielded a higher accuracy for the estimation of PND (R2val = 0.571, RMSEval = 2.846 g/m2, and RPDval = 1.532), compared to that based on the combination of RGB-based VIs; (2) there was no significant heterogeneity in the textures of different orientations and the textures of 45 degrees were recommended in the winter wheat N status estimation; (3) compared with the RGB-based VIs and color parameters, the GPR model based on the Gabor-based textures produced a higher accuracy for the estimation of PND (R2val = 0.675, RMSEval = 2.493 g/m2, and RPDval = 1.748) and the PLSR model based on the GLCM-based textures produced a higher accuracy for the estimation of PNC (R2val = 0.612, RMSEval = 0.380%, and RPDval = 1.601); and (4) the combined use of RGB-based VIs, color parameters, and textures produced comparable estimation results to using textures alone. Both VIP-PLSR and GPR-BAT analyses confirmed that image textures contributed most to the estimation of winter wheat N status. The experimental results reveal the potential of image textures derived from high-definition UAV-based RGB images for the estimation of the winter wheat N status. They also suggest that a conventional low-cost digital camera mounted on a UAV could be well-suited for winter wheat N status monitoring in a fast and non-destructive way.
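The Gabor-texture-plus-GPR combination proposed in this study can be sketched roughly as below: a 4-orientation by 5-scale Gabor filter bank is applied to a gray-scale canopy patch, mean response magnitudes form the feature vector, and a scikit-learn Gaussian processes regressor maps features to an N status indicator. The frequencies, kernel, and synthetic training data are assumptions, not the authors' settings.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gabor_features(gray: np.ndarray) -> np.ndarray:
    """Mean Gabor magnitude at 4 orientations x 5 scales (20 features)."""
    feats = []
    for theta in np.arange(4) * np.pi / 4:            # 0, 45, 90, 135 degrees
        for freq in (0.05, 0.1, 0.2, 0.3, 0.4):       # assumed scales
            real, imag = gabor(gray, frequency=freq, theta=theta)
            feats.append(np.hypot(real, imag).mean())
    return np.array(feats)

# Hypothetical training: X holds per-plot texture features, y plant N density (g/m2)
rng = np.random.default_rng(1)
X = np.stack([gabor_features(rng.random((32, 32))) for _ in range(20)])
y = rng.random(20) * 10
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
print(gpr.predict(X[:3]))
```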
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Chun-Han, Kuang-Yu Chen, and Li-yu Daisy Liu. "Effect of Texture Feature Distribution on Agriculture Field Type Classification with Multitemporal UAV RGB Images." Remote Sensing 16, no. 7 (2024): 1221. http://dx.doi.org/10.3390/rs16071221.

Full text
Abstract:
Identifying farmland use has long been an important topic in large-scale agricultural production management. This study used multi-temporal visible RGB images taken from agricultural areas in Taiwan by UAV to build a model for classifying field types. We combined color and texture features to extract more information from RGB images. The vectorized gray-level co-occurrence matrix (GLCMv), instead of the common Haralick feature, was used as texture to improve the classification accuracy. To understand whether changes in the appearance of crops at different times affect image features and classification, this study designed a labeling method that combines image acquisition times and land use type to observe it. The Extreme Gradient Boosting (XGBoost) algorithm was chosen to build the classifier, and two classical algorithms, the Support Vector Machine and Classification and Regression Tree algorithms, were used for comparison. In the testing results, the highest overall accuracy reached 82%, and the best balance accuracy across categories reached 97%. In our comparison, the color feature provides the most information about the classification model and builds the most accurate classifier. If the color feature were used with the GLCMv, the accuracy would improve by about 3%. In contrast, the Haralick feature does not improve the accuracy, indicating that the GLCM itself contains more information that can be used to improve the prediction. It also shows that with combined image acquisition times in the label, the within-group sum of squares can be reduced by 2–31%, and the accuracy can be increased by 1–2% for some categories, showing that the change of crops over time was also an important factor of image features.
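As I read the abstract, the vectorized GLCM (GLCMv) feature keeps the full co-occurrence matrix as a flattened vector instead of summarizing it with Haralick statistics. The sketch below shows one way this could look, with the quantization level, distance, and angle chosen as assumptions; feeding the vectors to XGBoost is indicated only in comments.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_vector(gray_u8: np.ndarray, levels: int = 16) -> np.ndarray:
    """Vectorized GLCM: quantize to 'levels' gray values and flatten the
    normalized co-occurrence matrix into a feature vector (levels * levels)."""
    quant = (gray_u8.astype(int) * levels // 256).astype(np.uint8)
    glcm = graycomatrix(quant, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return glcm[:, :, 0, 0].ravel()          # 256-dim vector for levels=16

# These vectors (optionally concatenated with color statistics) could then be
# passed to a classifier such as XGBoost:
# from xgboost import XGBClassifier
# clf = XGBClassifier().fit(X_train, y_train)
```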
APA, Harvard, Vancouver, ISO, and other styles