Dissertations / Theses on the topic 'Histogram equalization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Histogram equalization.'
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Kurak, Charles W. Jr. "Adaptive Histogram Equalization, a Parallel Implementation." UNF Digital Commons, 1990. http://digitalcommons.unf.edu/etd/260.
Yakoubian, Jeffrey Scott. "Adaptive histogram equalization for mammographic image processing." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/16387.
Kvapil, Jiří. "Adaptivní ekvalizace histogramu digitálních obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228687.
Gomes, David Menotti. "Contrast enhancement in digital imaging using histogram equalization." PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00470545.
Gaddam, Purna Chandra Srinivas Kumar, and Prathik Sunkara. "Advanced Image Processing Using Histogram Equalization and Android Application Implementation." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13735.
Skosan, Marshalleno. "Histogram equalization for robust text-independent speaker verification in telephone environments." Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/5103.
Gatti, Pruthvi Venkatesh, and Krishna Teja Velugubantla. "Contrast Enhancement of Colour Images using Transform Based Gamma Correction and Histogram Equalization." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14424.
Mallampati, Vivek. "Image Enhancement & Automatic Detection of Exudates in Diabetic Retinopathy." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18109.
Naram, Hari Prasad. "Classification of Dense Masses in Mammograms." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1528.
Jomaa, Diala. "Fingerprint Segmentation." Thesis, Högskolan Dalarna, Datateknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4264.
Pehrson Skidén, Ottar. "Automatic Exposure Correction And Local Contrast Setting For Diagnostic Viewing of Medical X-ray Images." Thesis, Linköping University, Department of Biomedical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56630.
Full textTo properly display digital X-ray images for visual diagnosis, a proper display range needs to be identified. This can be difficult when the image contains collimators or large background areas which can dominate the histograms. Also, when there are both underexposed and overexposed areas in the image it is difficult to display these properly at the same time. The purpose of this thesis is to find a way to solve these problems. A few different approaches are evaluated to find their strengths and weaknesses. Based on Local Histogram Equalization, a new method is developed to put various constraints on the mapping. These include alternative ways to perform the histogram calculations and how to define the local histograms. The new method also includes collimator detection and background suppression to keep irrelevant parts of the image out of the calculations. Results show that the new method enables proper display of both underexposed and overexposed areas in the image simultaneously while maintaining the natural look of the image. More testing is required to find appropriate parameters for various image types.
Saikaley, Andrew Grey. "Imaging, characterization and processing with axicon derivatives." Thesis, Laurentian University of Sudbury, 2013. https://zone.biblio.laurentian.ca/dspace/handle/10219/2039.
Engelhardt, Erik, and Simon Jäger. "An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260090.
Full textI denna studie studerades effekten av flera bildförbehandlingsmetoder på Convolutional Neural Networks (CNN). Mätvärden såsom accuracy, precision, recall och F1-score (Hossin et al. 2011) utvärderades. Specifikt är denna studie inriktad på malariaklassificering med hjälp av ett dataset som tillhandahålls av U.S. National Library of Medicine (Malaria Datasets n.d.). Detta dataset innehåller bilder av tunna blodutstryk, med segmenterade oinfekterade och parasiterade blodceller. I denna studie föreslogs 3 CNN-modeller för parasiteringsklassificeringen. Varje modell tränades på det ursprungliga datasetet och 4 förbehandlade dataset. De förbehandlingsmetoder som användes för att skapa de 4 dataseten var gråskala, normalisering, histogramutjämning och kontrastbegränsad adaptiv histogramutjämning (CLAHE). Effekten av CLAHE-förbehandlingen gav en förbättring av 1.46% (modell 1) och 0.61% (modell 2) jämfört med det ursprungliga datasetet, vad gäller F1-score. En modell (modell 3) gav inget resultat. Resultaten visar att CNN:er kan användas för parasiteringsklassificering, men effekten av förbehandling är begränsad.
Gajjela, Venkata Sarath, and Surya Deepthi Dupati. "Mobile Application Development with Image Applications Using Xamarin." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15838.
Thai, Ba Chien. "Tone Mapping Operators for High Dynamic Range Images." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD082.
Full textHe conversion of High Dynamic Range (HDR) image into Low Dynamic Range (LDR) image is investigated so that the visual rendering of the latter is of good quality. The first contribution focused on the contrast enhancement of the tone mapped image using a piecewise linear function as a non-uniform histogram equalization adjustment to mode! the "s-shaped" curve of the human visual adaptation. The second and third contributions are concerned with the details preservation of the HDR image on the tone mapped image. Separable and non-separable multiresolution approaches based on essential non-oscillatory strategies, taking into account the HDR image singularities in the mathematical mode! derivation, are proposed. The fourth contribution not only preserves details but also enhances the contrast of the HDR tone mapped image. A separable "near optimal" lifting scheme using an adaptive powerful prediction step is proposed. The latter relies on a linear weighted combination depending on the neighbouring coefficients to extract the relevant fin est details on the HDR image at each resolution level. A piecewise linear mapping is then applied on the coarse reconstruction. Simulation results provide good performance, both in terms of visual quality and Tone Mapped Quality Index (TMQI) metric, compared to existing competitive tone mapping approaches. The impact of the TMQI parameters on the visual quality of the tone mapped images is discussed. The proposed parameters show a strong correlation between the modified metric and
Мойсей, Павло Ігорович, and Pavlo Moisei. "Метод обробки зображень для верифікації особи в телекомунікаційних системах." Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33278.
The diploma thesis is devoted to the substantiation of an image processing method for identity verification in telecommunication systems using the Laplace filter and equalization of the image histogram. The analysis of face recognition methods is carried out, structural schemes of face verification systems in telecommunication networks are presented, and the image processing method is substantiated, which makes it possible to increase the reliability and speed of verification systems.
Contents: Introduction. Chapter 1, Analytical part: the problem of person recognition in telecommunication systems; the problem of person verification in telecommunication systems; classification of person verification methods; principles of person verification. Chapter 2, Main part: methods of person verification based on facial parameters; the operating algorithm of person recognition methods; person verification systems in telecommunication networks; image processing methods. Chapter 3, Research part: substantiation of the image processing method (histogram equalization; the Laplace filter); registration of experimental data; image processing for person verification; image analysis and evaluation. Chapter 4, Occupational safety and safety in emergencies. Conclusions. List of references. Appendix A: copy of the conference abstract. Appendix B: program listing.
Martišek, Karel. "Adaptivní filtry pro 2-D a 3-D zpracování digitálních obrazů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-234015.
Martišek, Karel. "Adaptive Filters for 2-D and 3-D Digital Images Processing." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-234150.
Full textNguyen, Tan-Sy. "A smart system for processing and analyzing gastrointestinal abnormalities in wireless capsule endoscopy." Electronic Thesis or Diss., Paris 13, 2023. http://www.theses.fr/2023PA131052.
In this thesis, we address the challenges associated with the identification and diagnosis of pathological lesions in the gastrointestinal (GI) tract. Analyzing the massive amount of visual information obtained by Wireless Capsule Endoscopy (WCE), an excellent tool for visualizing and examining the GI tract (including the small intestine), places a significant burden on clinicians and leads to an increased risk of misdiagnosis. To alleviate this issue, we develop an intelligent system capable of automatically detecting and identifying various GI disorders. However, the limited quality of acquired images, due to distortions such as noise, blur, and uneven illumination, poses a significant obstacle. Consequently, image pre-processing techniques play a crucial role in improving the quality of captured frames, thereby facilitating subsequent high-level tasks like abnormality detection and classification. To tackle the issues associated with these quality limitations, novel learning-based algorithms are proposed. More precisely, recent advances in image restoration and enhancement rely on learning-based approaches that require pairs of distorted and reference images for training; a significant challenge in WCE is the absence of a dedicated dataset for evaluating image quality. To the best of our knowledge, there currently exists no specialized dataset designed explicitly for evaluating video quality in WCE. Therefore, in response to the need for an extensive video quality assessment dataset, we first introduce the "Quality-Oriented Database for Video Capsule Endoscopy" (QVCED). Subsequently, our findings show that assessing distortion severity significantly improves image enhancement effectiveness, especially in the case of uneven illumination. To this end, we propose a novel metric dedicated to the evaluation and quantification of uneven illumination in laparoscopic or WCE images, extracting the image's background illuminance and considering the mapping effect of histogram equalization. Our metric outperforms some state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods, demonstrating its superiority and competitive performance compared to Full-Reference IQA (FR-IQA) methods. After the assessment step, we develop an image quality enhancement method that leverages a cross-attention algorithm to establish a comprehensive connection between the extracted distortion level and the degraded regions within the images. By employing this algorithm, we can precisely identify and target the specific areas affected by distortions, allowing an enhancement tailored to each degraded region and thereby effectively improving image quality. Following the improvement of image quality, visual features are extracted and fed into a classifier to provide a diagnosis through classification. A challenge in the WCE domain is that a significant portion of the data remains unlabeled. To overcome this, we developed an efficient method based on a self-supervised learning (SSL) approach to enhance classification performance. The proposed method, utilizing attention-based SSL, successfully addresses the issue of limited labeled data commonly encountered in the existing literature.
Honório, Tatiane Cruz de Souza. "Modelos de compressão de dados para classificação e segmentação de texturas." Universidade Federal da Paraíba, 2010. http://tede.biblioteca.ufpb.br:8080/handle/tede/6044.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This work analyzes methods for texture image classification and segmentation using models from lossless data compression algorithms. Two data compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), which had been applied to texture classification in previous works. The textures are pre-processed using histogram equalization. The classification method is divided into two stages. In the learning (training) stage, the compression algorithm builds statistical models for the horizontal and vertical structures of each class. In the classification stage, samples of textures to be classified are compressed using the models built in the learning stage, sweeping the samples horizontally and vertically. A sample is assigned to the class that obtains the highest average compression. The classifier tests were made using the Brodatz texture album. The classifiers were tested for various context sizes (in the PPM case), numbers of samples, and training sets. For some combinations of these parameters, the classifiers achieved 100% correct classifications. Texture segmentation was performed only with PPM. Initially, the horizontal models are created using eight texture samples of size 32 x 32 pixels for each class, with a PPM context of maximum size 1. The images to be segmented are compressed by the class models, initially in blocks of size 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of size 32 x 32. The process is repeated until some model reaches a compression ratio in the range defined for the size of the block in question; a block can be subdivided down to size 4 x 4, at which point it is classified as belonging to the class of the model that reached the highest compression ratio.
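As an aside, the compress-against-each-class idea this abstract describes can be sketched in a few lines. The snippet below is an illustration only: zlib's DEFLATE stands in for the PPM/LZW coders used in the thesis, and the helper names are hypothetical.

```python
# Illustrative sketch: classify a texture sample by which class corpus
# lets it compress best (zlib DEFLATE as a stand-in for PPM/LZW).
import zlib
import numpy as np

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def classify(sample: np.ndarray, class_corpora: dict) -> str:
    """class_corpora maps a class name to concatenated training texture bytes."""
    sample_bytes = sample.astype(np.uint8).tobytes()
    best_class, best_cost = None, None
    for name, corpus in class_corpora.items():
        # Extra bytes needed to encode the sample after the class corpus
        # approximate the sample's cross-entropy under that class's model.
        cost = compressed_size(corpus + sample_bytes) - compressed_size(corpus)
        if best_cost is None or cost < best_cost:
            best_class, best_cost = name, cost
    return best_class
```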
Wang, Chu-Hsuan, and 王楚軒. "Robust indoor localization using histogram equalization." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5vkxfq.
Yuan Ze University, Department of Electrical Engineering, academic year 104.
Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. Received Signal Strength (RSS), mostly utilized in Wi-Fi fingerprinting systems, is known to be unreliable due to environmental and hardware effects. The PHY-layer information about channel quality known as Channel State Information (CSI) can be used thanks to its frequency diversity (OFDM sub-carriers) and spatial diversity (multiple antennas); the over-fitting caused by the extension of CSI dimensions should be considered. This paper proposes two approaches based on histogram equalization (HEQ) and information-theoretic learning (ITL) to compensate for hardware variation, orientation mismatch, and over-fitting in a robust localization system. The proposed method involves converting the temporal-spatial radio signal strength into a reference function (i.e., equalizing the histogram). This paper makes two principal contributions: first, the equalized RF signal is capable of improving the robustness of location estimation; second, ITL's more discriminative components provide increased flexibility in determining the number of required components and achieve better computational efficiency.
Bhubaneswari, M. "Optimized Histogram Equalization for Image Enhancement." Thesis, 2015. http://ethesis.nitrkl.ac.in/6802/1/Optimized_Bhubaneswari_2015.pdf.
Chung, Xin-fang, and 鍾欣芳. "Simulation of Histogram Equalization for Classification Problem." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/6xk7u8.
National Taiwan University of Science and Technology, Department of Information Management, academic year 99.
Histogram equalization (HEQ) is a technique for improving the darkness and brightness of an image by adjusting its gray levels based on the cumulative distribution function (CDF). In recent years, this method has been applied to other problems, including robust speech recognition, to resolve the mismatch between noisy and clean speech, and natural language processing, for the cross-database problem. This paper analyzes by simulation how histogram equalization influences a simple classification problem. The results show that a rough CDF curve caused by insufficient data leads to poor mapping between training and test data and degrades performance. Direct and indirect operations of histogram equalization achieve similar performance for linear or non-linear transformations, while the performance of the indirect one is more sensitive to the type of classifier. With a sufficient amount of training data, HEQ and mean-standard deviation weighting (MSW) achieve comparable performance for linear transformations, while HEQ appears superior for non-linear transformations.
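For reference, the CDF-based gray-level remapping that HEQ performs can be sketched as follows for an 8-bit grayscale image (a minimal illustration, not code from the thesis):

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size            # cumulative distribution function
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                           # remap every gray level through the CDF
```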
Jhan, Shih-Sian, and 詹士賢. "Sobel Histogram Equalization for Image Contrast Enhancement." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76534008994939627485.
Leader University, Graduate Institute of Applied Informatics, academic year 95.
Contrast enhancement is an important technique in image processing. Although many contrast enhancement methods have been proposed, they do not focus on the edge quality of the image. In this study, Sobel histogram equalization (SHE) is proposed to enhance image contrast. In SHE, the image is divided into two regions, edge and non-edge, using the Sobel edge detector. The contrast of these two regions is enhanced individually, and the two regions are then merged into a whole image by histogram equalization. In our experiments, SHE outperforms other methods.
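A rough sketch of the idea described, splitting the image into edge and non-edge regions with a Sobel magnitude threshold and equalizing each region separately; the threshold value and function name are illustrative assumptions, not the thesis's algorithm:

```python
import cv2
import numpy as np

def sobel_split_equalize(img: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Equalize edge and non-edge regions separately (illustrative threshold)."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    edge_mask = cv2.magnitude(gx, gy) > thresh

    out = np.empty_like(img)
    for mask in (edge_mask, ~edge_mask):      # one pass per region
        vals = img[mask]
        if vals.size == 0:
            continue
        cdf = np.bincount(vals, minlength=256).cumsum() / vals.size
        out[mask] = np.round(255 * cdf).astype(np.uint8)[vals]
    return out
```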
Chuang, Chialung, and 莊佳龍. "Piece-Wise Histogram Equalization For Image Enhancement." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/03838447444191654772.
I-Shou University, Department of Information Engineering (in-service master's program), academic year 100.
Histogram equalization (HE), which has been intensively studied for decades, is one of the most popular techniques because it can produce high-performance results without complex parameters. Histogram equalization is widely used in a variety of image applications, for instance radar signal processing and medical image processing. However, HE suffers from the choice of a proper dynamic range, which can over-enhance images and cause poor visual quality. Common HE methods use a piece-wise algorithm that decomposes the input image into N sub-images and then enhances the sub-images individually; the resulting image is a combination of the enhanced sub-images. However, existing piece-wise algorithms do not guarantee successful enhancement. In this thesis, we propose a novel piece-wise algorithm that uses a "unilateralism" method to enhance image details without losing the original brightness of the source image. Results indicate the proposed method provides efficient enhancement. Furthermore, the proposed method is extended to enhance color images. Simulation results are demonstrated and discussed.
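The thesis's "unilateralism" variant is not spelled out here, but the general sub-histogram idea it builds on can be sketched with a two-piece, mean-separated equalization in the BBHE family (an assumption-laden illustration, not the thesis's method):

```python
import numpy as np

def mean_split_equalize(img: np.ndarray) -> np.ndarray:
    """Equalize the sub-histograms below and above the mean, each in its own range."""
    m = int(img.mean())
    lut = np.empty(256, dtype=np.uint8)
    for lo, hi in ((0, m), (m + 1, 255)):
        hist = np.bincount(img[(img >= lo) & (img <= hi)], minlength=256)[lo:hi + 1]
        if hist.sum() == 0:
            lut[lo:hi + 1] = np.arange(lo, hi + 1)   # identity for empty segments
            continue
        cdf = hist.cumsum() / hist.sum()
        lut[lo:hi + 1] = np.round(lo + (hi - lo) * cdf)
    return lut[img]
```

Keeping each sub-histogram inside its own output range is what preserves the mean brightness that plain HE tends to shift.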
Lin, Yi-Shan, and 林怡珊. "Partitioned Dynamic Range Histogram and Its Application to Obtain Better Histogram Equalization." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37273415144723906115.
National Cheng Kung University, Institute of Computer and Communication Engineering, academic year 101.
Image contrast enhancement algorithms are designed to adjust contrast in conformity with human visual perception. Histogram equalization (HE) is a widely used and popular technique for image contrast enhancement. However, it may produce over-enhancement, a washed-out appearance, and detail loss in some parts of the processed image, making the result unnatural. This thesis proposes a novel compensatory histogram equalization method. When applying HE, intensities are mapped by calculating the cumulative distribution function (CDF), which is derived from the probability density function (PDF). The proposed technique modifies the PDF of an image using the range distribution function (RDF), defined in this thesis, as a constraint prior to the HE process, so that the enhancement is performed without fatal loss of details. By remapping intensity levels, this approach provides a convenient and effective way to control the enhancement process. The proposed method can be applied to both high dynamic range (HDR) and low dynamic range (LDR) images; to accommodate more kinds of image storage technologies, it includes a simple preprocessing step for HDR images, so the method can be used on a wide range of image formats. Finally, experimental results show that the proposed method achieves better results in terms of Information Fidelity Criterion (IFC) values, an image quality measure, than some previous modified histogram-based equalization methods. Further, a fusion algorithm is adopted to combine images processed with different parameters for an optimal result, a strategy we believe is worthy of further exploration.
Kumar, Pankaj. "Image Enhancement Using Histogram Equalization and Histogram Specification on Different Color Spaces." Thesis, 2014. http://ethesis.nitrkl.ac.in/5490/1/pankaj_arora_thesis.pdf.
Chou, Ching-Yao, and 周敬堯. "Medical Image Enhancement Using Modified Color Histogram Equalization." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/bdpgee.
National Chung Cheng University, Graduate Institute of Healthcare Information Management, academic year 103.
Image enhancement is a crucial application in medical imaging. Histogram equalization is one of the image enhancement techniques employed to enhance image contrast; it has become a vital part of general and medical image processing and has been widely studied and applied. However, traditional histogram equalization achieves poor enhancement results because it does not consider hue preservation. This study proposes a novel image enhancement method that incorporates hue preservation to address this problem, and additionally uses the Gabor filter to enhance image details. The results indicate that both methods achieve satisfactory results. Finally, the proposed methods are applied to retinal and prostate cancer images, which can effectively assist physicians in making professional judgments.
Yu, Chieh-chun, and 余杰群. "Speed-Up Parametric-Oriented and Contrast Limited Histogram Equalization." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6964bw.
National Taiwan University of Science and Technology, Department of Electrical Engineering, academic year 102.
In this thesis, two regional contrast enhancement schemes are proposed. The first, termed adaptive parametric-oriented histogram equalization (APOHE), effectively generates artifact-free regional contrast-enhanced images. First, the grayscale histogram of a specific region is modeled with multiple Gaussian distributions adjusted by two user-defined parameters (α, β) to yield good contrast; to improve processing efficiency, the required mean and variance of these Gaussian distributions can be rapidly derived through the concept of the integral image. In addition, the adaptively corrected POHE (AcPOHE) is proposed to further improve the contrast with a limited trade-off in computations. Experimental results demonstrate good practical value, so the method can be applied in areas such as pattern recognition, biometrics analysis, and surveillance systems. Compared with former speed-oriented methods, good contrast and artifact-free results are achieved simultaneously. Although regional contrast enhancement methods obtain richer details as expected, the noise accompanying the images is enhanced as well, particularly in homogeneous regions. To address this, contrast limited adaptive histogram equalization (CLAHE) uses the AHE structure with a restricted slope of the transformation function to reduce noise; yet massive computational complexity is its major deficiency. To cope with this, a method termed integral CLAHE (ICLAHE) is proposed, exploiting the concept of the integral image and a property of pdf clipping to reduce computations from the original O(M^2×P^2) to O((L+1)×P^2) for images of size P×P and contextual regions of size M×M. Compared with state-of-the-art regional contrast enhancement methods, the proposed method is not merely the simplest, providing fewer halo effects and less noise, but also offers richer distinguishable textural details. As a result, a great potential of the proposed method for medical imaging is demonstrated.
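The integral-image trick this abstract leans on for fast local statistics can be sketched generically as below; this is an illustration of O(1)-per-pixel box sums, not the thesis's implementation:

```python
import numpy as np

def local_stats(img: np.ndarray, w: int):
    """Local mean/variance over (2w+1)x(2w+1) windows via integral images."""
    p = np.pad(img.astype(np.float64), w, mode="edge")
    k = 2 * w + 1

    def box_sum(a):
        s = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
        s[1:, 1:] = a.cumsum(axis=0).cumsum(axis=1)          # integral image
        return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

    mean = box_sum(p) / (k * k)
    var = box_sum(p * p) / (k * k) - mean ** 2               # E[x^2] - E[x]^2
    return mean, var

# For the contrast-limited variant, OpenCV's CLAHE is a convenient reference:
#   out = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
```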
Syue, Jin-Yu, and 薛晉宇. "An Efficient Fusion-Based Contrast Limited Histogram Equalization Defogging." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/n32w75.
National Taiwan University of Science and Technology, Department of Electrical Engineering, academic year 104.
Image quality degradation is often introduced by capturing in poor weather conditions such as fog or haze. To overcome this problem, conventional approaches focus mainly on enhancing the overall image contrast. However, because of the unspecified light-source distribution or unsuitable mathematical constraints in the cost functions, quality results are often difficult to achieve. In this thesis, a fusion-based transmission estimation method is introduced to adaptively combine two different transmission models. Specifically, the new fusion weighting scheme and the atmospheric light computed from the Gaussian-based dark channel method improve the estimation of the locations of the light sources. To reduce the flickering effect introduced during frame-based dehazing, a flicker-free module is formulated to alleviate its impact. The system assessments show this approach is capable of superior defogging and dehazing performance compared to state-of-the-art methods, both quantitatively and qualitatively. However, due to the inherent constraints of optical-based defogging, local image details are usually sacrificed, which degrades practicability. In this thesis, we also propose a solution to this issue: the traditional image enhancement method, contrast limited adaptive histogram equalization (CLAHE), is further exploited by reducing its computational complexity, and is then combined with the optical-based defogging method to enhance image detail while preserving color fidelity. To address the over-brightness and low-contrast issues resulting from unsuitable block sizes, an adaptive refinement module based on two brightness channels is also proposed. The quantitative and qualitative system assessment shows that the proposed approach achieves superior defogging performance and effectively maintains image naturalness compared to state-of-the-art methods, making it the best candidate for various applications.
Lin, Jia-Hwa, and 林佳華. "Edge Preserving for Contrast Enhancement Based on Histogram Equalization." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/37428206236087680201.
National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, academic year 98.
This thesis provides a new method that preserves edges while enhancing the contrast of color images. It is based on an edge-based histogram method, different from traditional histogram-based equalization. Our method not only achieves a good contrast enhancement effect but also avoids artifacts. First, we use the Sobel operator to detect edges and use them to update the histogram. Based on the updated histogram, a new transformation function is generated. Experimental results show that the proposed method surpasses previous methods in image quality.
Hsieh, Wen-lung, and 謝文龍. "Study of global contrast enhancement by adaptive histogram equalization." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/64296881409979898418.
National Yunlin University of Science and Technology, Department of Electrical Engineering (master's program), academic year 98.
HDR images are formed in two ways: one combines multiple captures of the same scene at different exposures into a single image that retains the visual details; the other expands the contrast of a single image and then compresses it into a high-dynamic-contrast image. Given only a single high dynamic range image, how can a low-contrast display faithfully show the beautiful natural scene? In general there are two methods: a simple contrast change quickly gets results but may lose bright-part or shadow detail; alternatively, brightening the dark parts layer by layer with Gaussian filtering can present the details, but it is slow and usually leaves the image with a painted, unnatural look. This thesis proposes a method for expanding and compressing image contrast: in the RGB color model, dividing the image by a control coefficient changes the global image brightness, supplemented by an adaptive histogram equalization technique to improve LDR and HDR images. LDR images are divided into darker images and general images; we select a contrast amplification factor for general images, and another set of coefficients so that dark images become visually acceptable without having to be discarded. HDR images are divided into three categories, for which coefficients and rules are likewise selected so that the processed images can be easily visualized. The proposed methodology is straightforward, and experiments comparing it with some traditional methods show that it can generally produce images with fine detail, good contrast, and a consistent visual feel for both general and HDR images.
Lo, Yi-Chung, and 羅一中. "Low Cost FPGA Circuit Design of Modified Histogram Equalization." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/h8k752.
National Taipei University of Technology, Department of Electrical Engineering, academic year 93.
For real-time moving-picture contrast enhancement, existing methods usually require one or more frame buffers to store intermediate output, which is expensive to implement in a practical hardware system. This thesis therefore proposes a modified histogram equalization (MHE) algorithm combined with a backward frame translation table to eliminate any need for frame buffers. Conventional histogram equalization is widely adopted in contrast enhancement because it automatically generates the transformation curve, transforming the image based on the cumulative distribution function of the histogram. However, it can over-enhance the contrast when the auto-generated transform curve is too sharp due to high peaks in the histogram; in this case, conventional histogram equalization may produce a harsh, noisy appearance in the output image. Our proposed MHE algorithm employs the mean and standard deviation to pick out the high peaks and modifies the histogram before building the translation table. A low-pass filter is then applied to the translation table to smooth contiguous values. This algorithm not only avoids over-enhancement and increases the dynamic range of the grey levels, but also translates low-spatial-frequency areas smoothly. The proposed algorithm was successfully implemented on an FPGA platform to demonstrate its effectiveness.
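A plausible sketch of the pipeline as described: clip histogram peaks picked out by the mean and standard deviation, then low-pass the translation table. The clip level `mean + k*std` and the 5-tap smoother are illustrative choices, not the thesis's exact parameters:

```python
import numpy as np

def mhe_lut(img: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Clip histogram peaks above mean + k*std, then low-pass the mapping curve."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clipped = np.minimum(hist, hist.mean() + k * hist.std())  # suppress high peaks
    lut = 255.0 * clipped.cumsum() / clipped.sum()
    # Smooth contiguous table entries with a 5-tap moving average (edge-padded).
    lut = np.convolve(np.pad(lut, 2, mode="edge"), np.ones(5) / 5, mode="valid")
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)

# frame_out = mhe_lut(frame)[frame]   # pure table lookup per frame; no frame buffer
```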
Chan, Ai-Ju, and 詹璦如. "Automatic Equal-Separated Histogram Equalization for High-Quality Contrast Enhancement." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/4c5mx6.
National Taiwan University of Science and Technology, Department of Electronic Engineering, academic year 99.
Histogram equalization is an effective technique for contrast enhancement. However, the traditional histogram equalization (HE) method usually results in extreme over-enhancement, which causes an unnatural look and visual artifacts in the processed image. In this thesis, we propose a novel histogram equalization method based on automatic histogram separation combined with a piecewise transformation function. Five enhancement methods, HE, BBHE, DSIHE, RSIHE, and the proposed method, are implemented in C for comparison. We first carry out qualitative and quantitative evaluations to show that our approach is effective. Afterwards, the power consumption is estimated using the Wattch toolset. Experimental results show that the proposed Automatic Equal-Separated Histogram Equalization (AESHE) not only keeps the shape features of the original histogram but also enhances the contrast effectively, even though its processing time and power consumption are slightly higher than those of other methods.
Tsai, Ting-Chou, and 蔡定洲. "A Weight-Based Contrast Enhancement Algorithm by Clustered Histogram Equalization." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/58774256714086442447.
Chen, Shin-Anne, and 陳信安. "Exposure-based Weighted Dynamic Histogram Equalization for Image Contrast Enhancement." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8rpdc8.
National Taipei University of Technology, Graduate Institute of Automation Technology, academic year 103.
Global histogram equalization (GHE) [1] is a common method used for improving image contrast. However, this technique tends to introduce unnecessary visual artifacts and cannot preserve overall brightness. To overcome these problems, many studies have been conducted based on partitioned-histogram (i.e., sub-histogram) equalization. An input image is first divided into sub-images, individual histograms of the sub-images are equalized independently, and all of the sub-images are ultimately integrated into one complete image. For example, exposure-based sub-image histogram equalization (ESIHE) [2] uses an exposure-related threshold to divide the original image into different intensity ranges (horizontal partitioning) and also uses the mean brightness as a threshold to clip the histogram (vertical partitioning). In this paper, a novel method, called exposure-based weighted dynamic histogram equalization (EWDHE), which is an extension of ESIHE, is proposed. This study makes three major contributions to the literature. First, an Otsu-based approach and a clustering performance measure are integrated to determine the optimal number of sub-histograms and the separating points. Second, an exposure-related parameter is used to automatically adapt the contrast limitation, to avoid over-enhancement in some portions of the image. Third, a new weighted scale factor is proposed to resize the sub-histograms, which accounts for the sub-histogram ranges and individual pixel numbers of these ranges. The simulation results indicated that the proposed method outperformed state-of-the-art approaches in terms of contrast enhancement, brightness preservation, and entropy preservation.
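For orientation, the ESIHE baseline [2] that EWDHE extends can be sketched as below (exposure-based split point plus mean-clipped sub-histograms); the Otsu-based multi-split, adaptive clip, and weighted scaling of EWDHE itself are omitted:

```python
import numpy as np

def esihe(img: np.ndarray) -> np.ndarray:
    """ESIHE-style sketch: exposure-based split, mean-clipped sub-histograms."""
    L = 256
    hist = np.bincount(img.ravel(), minlength=L).astype(np.float64)
    exposure = (hist * np.arange(L)).sum() / (L * hist.sum())
    xa = min(max(int(L * (1.0 - exposure)), 0), L - 2)  # dark/bright boundary
    clipped = np.minimum(hist, hist.mean())             # clip at mean bin count
    lut = np.empty(L, dtype=np.uint8)
    for lo, hi in ((0, xa), (xa + 1, L - 1)):
        seg = clipped[lo:hi + 1]
        if seg.sum() == 0:
            lut[lo:hi + 1] = np.arange(lo, hi + 1)
            continue
        lut[lo:hi + 1] = np.round(lo + (hi - lo) * seg.cumsum() / seg.sum())
    return lut[img]
```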
Wang, Tsan-Wei, and 王讚緯. "A Voice Conversion System Using Histogram Equalization and Target Frame Selection." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/n6u2b9.
National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, academic year 102.
In this thesis, linear multivariate regression (LMR) is adopted for spectrum mapping. In addition, histogram equalization (HEQ) of spectral coefficients and target frame selection (TFS) are included in our system. We intend to solve the problem of spectral over-smoothing encountered by the conventional GMM (Gaussian mixture model) based mapping mechanism in order to improve the converted voice quality. Also, noting that parallel training sentences are hard to prepare, we study a method to construct an imitative parallel corpus from a nonparallel corpus. Next, we use a nonparallel corpus to build four voice conversion systems: LMR, LMR+TFS, HEQ+LMR and HEQ+TFS. In the training stage, the segment-based frame alignment method is refined to construct the imitative parallel corpus, which is then used to train the model parameters for the four voice conversion systems respectively. In the HEQ module, discrete cepstral coefficients (DCC) are first transformed to principal-component-analysis (PCA) coefficients and then to cumulative-density-function (CDF) coefficients. In the TFS module, a DCC vector obtained from LMR mapping and its segment-class number are used to search the corresponding frame set consisting of target-speaker frames belonging to the same segment class. The DCC vector of the frame in this set that is nearest to the LMR-mapped DCC vector is then found and taken to replace the mapped vector. In the conversion stage, it is seen that the HEQ module decreases the average DCC error, while the TFS module causes the average DCC error to increase. However, the TFS module really does improve the converted voice quality according to the variance-ratio measure, so the increased average DCC error does not indicate that the converted voice quality is worsened.
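The CDF-to-inverse-CDF mapping at the heart of feature-space HEQ can be sketched per feature dimension as follows; the function and argument names are hypothetical, and the thesis applies this to PCA-transformed DCCs rather than raw features:

```python
import numpy as np

def heq_map(x: np.ndarray, src_train: np.ndarray, tgt_train: np.ndarray) -> np.ndarray:
    """Map values through the source CDF, then the inverse target CDF."""
    src_sorted = np.sort(src_train)
    ranks = np.searchsorted(src_sorted, x) / src_sorted.size   # empirical source CDF
    return np.quantile(tgt_train, np.clip(ranks, 0.0, 1.0))   # target quantile lookup

# Applied independently to each feature dimension of the converted frames.
```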
Wu, Szu-Wei, and 吳思蔚. "Oriented Local Histogram Equalization Features and Its Application to Face Recognition." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/30115814217570120687.
National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 98.
In this paper, we propose a novel image preprocessing method that enhances the local oriented contrast of facial images using oriented local histogram equalization (OLHE), and apply it to face recognition. The method preserves local oriented information by performing local histogram equalization (LHE) with asymmetric kernels. To extract features from a facial image, we concatenate the results processed with eight different OLHE orientations, called the 8-oriented OLHE feature. We expect better face recognition results because the feature contains both local information and orientations, and this inference is supported by the experimental results. The key advantages of the method are its low computational complexity, its invariance to illumination changes, and the ease with which it can be integrated with other face recognition algorithms. We demonstrate the integration of OLHE with Sparse Representation-based Classification (SRC), a holistic face recognition algorithm, and with Facial Trait Code (FTC), a part-based face recognition algorithm. Furthermore, we propose Sparse Representation Facial Trait Code (SRFTC), an integration of FTC and SRC that combines the advantages of the two algorithms and effectively decreases the influence of their shortcomings. Based on experiments on the AR database, we obtain a 99.3% recognition rate with the holistic face recognition algorithm and a 99.8% recognition rate with the part-based face recognition algorithm.
Wang, Chao-Hsin, and 王肇薪. "Novel Mean-Shift based Histogram Equalization by using Dynamic Range Suppression." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74827709482354308199.
National Taiwan University of Science and Technology, Graduate Institute of Automation and Control, academic year 98.
This paper presents a novel mean-shift based histogram equalization method called MSHE. The key idea of the proposed MSHE method is to cluster the pixels in the non-smooth areas of the image using the mean-shift algorithm, suppress the dynamic range of the histogram composed of the clustered pixels, and then perform histogram equalization. Further, a contrast enhancement assessment is presented to compare the contrast effect of our method with six other methods: HE, BBHE, DSIHE, RSWHE, SRHE, and GA. Based on three typical test images, experimental results indicate that the proposed MSHE method outperforms the six existing contrast enhancement methods.
Lin, Ping-Hsien, and 林秉賢. "Contrast Enhancement for Digital Color Images Using Variants of Histogram Equalization." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/44473391439777502972.
National Taiwan University, Graduate Institute of Electrical Engineering, academic year 97.
With the prevalence of digital photography nowadays, more and more consumer electronic devices are equipped with photo-shooting functionality. Most such equipment, however, is not intended for professional photography, and under economic considerations its components are not refined enough. This produces pictures that are hardly acceptable under some extreme shooting conditions, such as low-contrast scenes, and post-processing techniques must be relied upon to improve the quality of these images. In this thesis, we propose two primary methods, Iterative Sub-Histogram Equalization (ISHE) and Statistic-Separate Tri-Histogram Equalization (SSTHE), for contrast enhancement of color images with brightness preservation, and a secondary post-enhancement technique, the Gaussian Distributive Filter (GDF), to directly improve contrast at a micro level and reduce brightness quantization in the output histogram of the former methods. ISHE generates a high-contrast image and preserves brightness to some degree by iteratively utilizing the BBHE method. SSTHE segments the original histogram into three regions according to the mean and standard deviation of the image brightness, re-ranges the span of each sub-histogram, and executes histogram equalization within each scope respectively. GDF locates and disperses over-concentrated values in the histogram with a Gaussian distribution pattern. Since histogram calculation has already been maturely implemented in hardware, the methods proposed in this thesis can readily be applied to still color images; their simplicity and low computational requirements make them suitable for consumer electronics.
Huang, Kai-hsiang, and 黃愷翔. "FPGA Implementation Of Histogram Equalization Based Real Time Video Image Processing." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/80500074206894109154.
I-Shou University, Department of Electronic Engineering (master's program), academic year 97.
In this dissertation, a video image processing system was implemented on an Altera DE2-70 FPGA development board. The video images were captured by a 5-megapixel CMOS digital camera. The first stage of the vision system is image acquisition: the image is acquired and then stored in SDRAM. After the image has been obtained, various processing methods can be applied to perform different vision tasks, such as histogram equalization for image enhancement. The processed image data are output to the LCD touch screen through the LCD touch screen controller. The video system contains five modules, namely the image acquisition module (CMOS Sensor Data Capture), the image data format conversion and sampling module (Bayer Color Pattern Data To 30-Bit RGB), the SDRAM controller module (Multi-Port SDRAM Controller), the image processing module (Image process), and the LCD touch screen controller (LTM Controller And Data Request). The image capture module converts a two-dimensional image into a one-dimensional electrical signal that can be handled by the computer. The image data format conversion and sampling module transforms the image data into the 10-bit-per-channel RGB format. The SDRAM controller module controls access of the image data to the SDRAM. The image processing module implements algorithms such as histogram equalization and an averaging smoothing filter. The LCD touch-screen controller outputs the image data to the LCD touch screen. The proposed algorithms are applied to gray-scale and color image processing. The proposed image enhancement method is simulated with the Altera Quartus II software tool, and the verified code is downloaded to the FPGA for hardware verification. The results indicate that the proposed video system can obtain better image quality.
Cheng, Yao-Ren, and 鄭堯仁. "A Study on Optimized Histogram Equalization Methods for Hand Radiograph Segmentation Scheme." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/n67848.
National Chung Hsing University, Department of Computer Science and Engineering, academic year 101.
Bone age assessment (BAA) uses a left-hand radiograph to analyze the growth status of the hand bones and estimate the biometric age. The information can be used to determine whether the biometric age and the actual age are inconsistent. BAA can not only be used to detect growth retardation but can also be applied to understand the growth potential of children. To analyze the growth status of the metacarpal bones, one should first segment the bone area from the rest of the image and then extract the features. Because this estimation is for children, some problems with metacarpal bone radiographs, such as over-tilted positioning of the palm or low illumination contrast, may make EMROI segmentation more difficult. These problems can be avoided through standardized image processing and procedures, and consequently BAA accuracy can be enhanced. Yet the low-illumination problem can further decrease the contrast between bones and muscle tissues, reducing the accuracy of EMROI segmentation; this can hinder extraction of the epiphysis and the bone age evaluation. Therefore, before segmenting the bone area, an image enhancement approach suited to the characteristics of radiographs is usually used to improve image quality and thus resolve these problems. Although conventional enhancement approaches such as histogram equalization are excellent at enhancing the contrast of assorted images, for radiographs they may lead to other problems, such as overexposure and wiped-out details. In related research, optimized histogram equalization is the most commonly used approach for improving illumination contrast. In our method, morphology is first used to pre-process the EMROI, and experiments are then conducted to find the threshold values and other relevant parameters suitable for the distal, middle, and proximal phalanges. Afterward, optimized histogram equalization based on these values is applied to enhance the related region of interest. In the experiment, the investigators compared segmentation results with and without adaptive optimized histogram equalization for image enhancement, using indexes including accuracy, extraction error rate (ERR), and sensitivity. The experimental results show that the image enhancement approach presented in this study brings better segmentation results than those obtained without image enhancement. Therefore, the presented adaptive optimized histogram equalization can improve segmentation accuracy.
Lai, I. Ju, and 賴薏如. "An Image Enhancement Algorithm Using Histogram Equalization and Content-Aware Image Segmentation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95158538842882740904.
Chang Gung University, Department of Computer Science and Information Engineering, academic year 98.
Illumination is fundamental to visibility and thus an important factor in the human perception of images. Adjusting illumination to enhance an image is of large practical value because image quality is easily degraded by non-ideal light sources. Histogram equalization has frequently been used to adjust illumination and has been verified to improve the overall quality of the target image. However, this approach may also squash the available luminance range in particular local areas, dulling the image content there. Researchers have introduced matrix-partition based local histogram equalization, which equalizes sub-images block by block. Although this method enhances the quality of the individual sub-images, the edges between blocks produce a "chessboard effect" that degrades the appearance of the whole image. In this thesis, we propose an image segmentation method based on energy analysis. Using an edge detection operation, we compute the gradient and derive the energy of each pixel. Analyzing the energy distribution of the image, we draw segmentation lines by connecting pixels with maximum energy values. Consequently, the segmentation is not matrix-shaped but consists of flexible partitions, on which we conduct local histogram equalization. Segmentation like this is highly relevant to the image contents; therefore, the edges between partitions do not reveal themselves after the illumination is adjusted. The proposed method can both maintain the natural look of the image and improve its overall quality.
Tsai, Shang-nien, and 蔡尚年. "Robust Speech Feature Front-End Processing Techniques Based on Progressive Histogram Equalization." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/70500526488219371940.
Chen, Ying-Kang, and 陳映綱. "Color Image Enhancement Using Luminance Histogram Equalization and Two-Factor Saturation Control." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/59384384736025534597.
Chung Yuan Christian University, Master's Degree Program in Communication Engineering, academic year 103.
Contrast is the key to visual effect. Generally speaking, greater contrast gives conspicuous images with plentiful color, while lower contrast results in grayish images. Brightness and contrast are highly interdependent; in general, an image with a uniform brightness distribution shows a great deal of gray-level detail and a high dynamic range. Many image editing software packages on the market and the Internet provide automatic image enhancement functions for public use, where one simple operation can improve the visual quality of the image. However, after real tests we found that, under certain conditions or special circumstances, some of the enhancement functions do not work effectively and have a great chance of causing color problems. In other words, even after an image has been enhanced in brightness, hue shifts and poor saturation often arise. The idea of the proposed method is therefore to keep the change in hue to a minimum and avoid altering color attributes. We use histogram equalization to increase the dynamic range of the luminance, and provide more options for saturation improvement based on the luminance change using our proposed technique, called two-factor reconstruction. Because the psychovisual sense of color cannot be quantified, the proposed method provides an adjustable parameter for users to meet their color preference. For convenience, the input image is classified into several categories, and a parameter-setting guideline for the category of that image is provided so that users can adjust the parameter to achieve the desired saturation of the output image. The experimental results show that the proposed method successfully improves image brightness and contrast while preserving hue information, and simultaneously improves the visual experience by enhancing saturation. It performs relatively well compared with other methods, and the statistics of the psychological assessment also show that using the parameter to adjust image saturation can fully meet the needs of users.
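A simplified stand-in for the pipeline described, equalizing luminance and then scaling saturation with a single user gain; the thesis's two-factor reconstruction is more elaborate than this sketch, whose parameter values are assumptions:

```python
import cv2
import numpy as np

def enhance(img_bgr: np.ndarray, sat_gain: float = 1.2) -> np.ndarray:
    """Equalize luminance in YCrCb, then scale saturation in HSV (hue untouched)."""
    y, cr, cb = cv2.split(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb))
    out = cv2.cvtColor(cv2.merge((cv2.equalizeHist(y), cr, cb)), cv2.COLOR_YCrCb2BGR)
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)     # user-tunable saturation
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```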
Lin, Cheng-Feng, and 林成峯. "Landslide Detection with Multi-Dimensional Histogram Equalization for Multispectral Remotely Sensed Imagery." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/49425200590105182433.
National Central University, Institute of Space Science, academic year 101.
Taiwan is located in the Circum-Pacific seismic zone, so earthquakes are frequent in this region. In addition, several typhoons usually pass through this subtropical region each year. These two natural phenomena may cause serious landslides in the mountainous regions. For landslide hazard assessment, change detection with remote sensing images is an efficient and effective approach. Change detection is one of the most important applications of remote sensing, and it provides useful information for various applications, including disaster monitoring, urban development, and agriculture management. By comparing two images of the same location collected at different times, ground surface changes can be detected. However, the difference in spectrum may not solely result from changes on the ground: the spectrum of the same material in two remote sensing images may not be the same due to differences in solar illumination and atmospheric conditions when the images were obtained. Therefore, radiometric calibration is required before applying the change detection algorithm and comparing spectra. In this study, we propose a multi-dimensional histogram equalization algorithm as a pre-processing step for relative calibration. It modifies multispectral images collected under different atmospheric conditions to have similar spectra for the same land cover. A set of SPOT images is adopted for experiments, and results show the proposed method can reduce the misclassification rate.
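The thesis equalizes multi-dimensional (joint multi-band) histograms; the standard per-band baseline it generalizes, matching one image's gray-level distribution to another's, can be sketched as follows (an illustration, not the thesis's algorithm):

```python
import numpy as np

def match_band(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match one spectral band of `src` to the gray-level distribution of `ref`."""
    s_vals, s_counts = np.unique(src, return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # Reference value sitting at the same quantile as each source value:
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, src)]
```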
Liu, Wen-bin, and 劉文彬. "A Fast Approach for Enhancing Sequence of Color Images Using Dichotomy Histogram Equalization." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/02770440886371219731.
Yuan Ze University, Department of Computer Science and Engineering, academic year 90.
This study presents a novel approach for enhancing sequences of color images using the technique of dichotomy histogram equalization. Each color pixel in the RGB color space is first transformed to the YCbCr color space. A mapping table is then created, adopting the techniques of histogram projection and dichotomy histogram equalization. After the Y component of the input image is replaced, the Cb and Cr components are adjusted according to the color area in the YCbCr color space. Finally, the result is transformed back to the RGB color space. Experimental results reveal the practicability of the proposed method.
Hilger, Florian Erich. "Quantile Based Histogram Equalization for Noise Robust Speech Recognition." 2004. http://d-nb.info/974461431/34.
Li, Wei-Jia, and 李尉嘉. "Enhancing Low-exposure Images Based on Modified Histogram Equalization and Local Contrast Adaptive Enhancement." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/03340322721691697800.
National Chung Hsing University, Department of Computer Science and Engineering, academic year 104.
Image enhancement methods can effectively improve the visual content of images, provide a better visual experience, and let computers work on images more efficiently. Therefore, enhanced images tend to be more suitable than original images from the perspective of a particular application. Two common drawbacks usually exist in traditional image enhancement methods: one is over-enhancement and the other is loss of details. In this thesis, we propose an adaptive method to enhance the illumination of color images. The method performs enhancement in two steps. The first step adjusts the image content based on the image histogram to decrease unnatural points and avoid over-brightness. The second step applies an adaptive local contrast enhancement algorithm to reduce the loss of details. Experimental results show that the brightness and contrast of low-exposure images can be effectively improved by our method. Compared with other methods, ours performs better in terms of objective measurements such as Contrast, Entropy, Gradient, and Absolute Mean Brightness Error (AMBE).
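The objective measurements named above are easy to reproduce; the snippet below gives common textbook definitions, though the thesis may define Contrast and Gradient differently:

```python
import numpy as np

def ambe(orig: np.ndarray, enh: np.ndarray) -> float:
    """Absolute Mean Brightness Error: lower = better brightness preservation."""
    return abs(float(orig.mean()) - float(enh.mean()))

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of the gray-level distribution; higher = more detail."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rms_contrast(img: np.ndarray) -> float:
    """RMS contrast: standard deviation of the gray levels."""
    return float(img.astype(np.float64).std())
```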
CHEN, ZHI-FAN, and 陳志凡. "An Image Enhancement Method Based on Bilateral Filtering and Contrast Limited Adaptive Histogram Equalization." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/77m8t9.
National Chung Cheng University, Graduate Institute of Information Management, academic year 104.
At present, digital photography cannot precisely present a scene as seen by the human eye, since display devices are typically low dynamic range rather than high dynamic range. In other words, the devices are often unable to display the details of shadows and highlights at the same time for high-contrast images. If a normal image enhancement method is used to enhance these images, it may result in an uneven distribution of image brightness, color distortion, or loss of image detail. Therefore, this study proposes a method to resolve these problems. It starts by using a bilateral filter to retain image details, then automatically sets the operating parameters of contrast limited adaptive histogram equalization to make an appropriate contrast adjustment to the base-layer image, so that the device's display comes closer to the visual quality of high dynamic range. In the experiments, we find that the proposed method is superior to other state-of-the-art methods in detail information, hue retention, and brightness enhancement. It also performs better on objective mathematical evaluation indexes.
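A minimal sketch of the base/detail scheme described, assuming fixed bilateral and CLAHE parameters where the thesis chooses them automatically:

```python
import cv2
import numpy as np

def bilateral_clahe(gray: np.ndarray, clip: float = 2.0) -> np.ndarray:
    """Bilateral base/detail split; CLAHE on the base layer only; recombine."""
    base = cv2.bilateralFilter(gray, 9, 75, 75)               # edge-preserving smoothing
    detail = gray.astype(np.int16) - base.astype(np.int16)    # residual detail layer
    base_eq = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8)).apply(base)
    return np.clip(base_eq.astype(np.int16) + detail, 0, 255).astype(np.uint8)
```

Adding the untouched detail layer back after equalization is what lets the contrast adjustment act on illumination without amplifying or flattening fine structure.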