Theses / dissertations on the topic "Histogram equalization"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 theses / dissertations for your research on the topic "Histogram equalization."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, when it is present in the metadata.

Browse theses / dissertations from many different scientific fields and compile a correct bibliography.

1

Kurak, Charles W. Jr. "Adaptive Histogram Equalization, a Parallel Implementation". UNF Digital Commons, 1990. http://digitalcommons.unf.edu/etd/260.

Full text of the source
Abstract:
Adaptive Histogram Equalization (AHE) has been recognized as a valid method of contrast enhancement. The main advantage of AHE is that it can provide better contrast in local areas than that achievable utilizing traditional histogram equalization methods. Whereas traditional methods consider the entire image, AHE utilizes a local contextual region. However, AHE is computationally expensive, and therefore time-consuming. In this work two areas of computer science, image processing and parallel processing, are combined to produce an efficient algorithm. In particular, the AHE algorithm is implemented with a Multiple-Instruction-Multiple-Data (MIMD) parallel architecture. It is proposed that, as MIMD machines become more powerful and prevalent, this methodology can be applied to not only this particular algorithm, but also to many others in its class.
Styles: ABNT, Harvard, Vancouver, APA, etc.
2

Yakoubian, Jeffrey Scott. "Adaptive histogram equalization for mammographic image processing". Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/16387.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
3

Kvapil, Jiří. "Adaptivní ekvalizace histogramu digitálních obrazů". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228687.

Full text of the source
Abstract:
The diploma thesis focuses on the histogram equalization method and its extension with an adaptive boundary. It explains the basic notions on which histogram equalization is built. The next part describes human vision and the principles of imitating it. In the practical part of the thesis, software was created that makes it possible to apply adaptive histogram equalization methods to real images. At the end, some of the results achieved are presented.
Styles: ABNT, Harvard, Vancouver, APA, etc.
4

Gomes, David Menotti. "Contrast enhancement in digital imaging using histogram equalization". PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00470545.

Full text of the source
Abstract:
Nowadays devices are able to capture and process images from complex surveillance monitoring systems or from simple mobile phones. In certain applications, the time necessary to process the image is not as important as the quality of the processed images (e.g., medical imaging), but in other cases the quality can be sacrificed in favour of time. This thesis focuses on the latter case and proposes two methodologies for fast image contrast enhancement, all based on histogram equalization (HE): some methods handle gray-level images and others handle color images. As far as HE methods for gray-level images are concerned, current methods tend to shift the mean brightness of the image to the middle of the gray-level range. This is not desirable in image contrast enhancement for consumer electronics products, where preserving the input brightness of the image is required to avoid generating non-existing artifacts in the output image. To overcome this drawback, bi-histogram equalization methods that both preserve brightness and enhance contrast have been proposed. Although these methods preserve the input brightness on the output image while enhancing contrast significantly, they may produce images that do not look as natural as the input ones. To overcome this drawback, we propose a technique called Multi-HE, which consists of decomposing the input image into several sub-images and then applying the classical HE process to each of them. This methodology performs a less intensive contrast enhancement, so that the output image looks more natural. We propose two discrepancy functions for image decomposition, which lead to two new Multi-HE methods, and a cost function to decide automatically into how many sub-images the input image will be decomposed. Experimental results show that our methods are better at preserving brightness and producing natural-looking images than other HE methods. To deal with contrast enhancement in color images, we introduce a generic fast hue-preserving histogram equalization method based on the RGB color space, and two instances of the proposed generic method. The first instance uses the R (red), G (green), and B (blue) 1D histograms to estimate an RGB 3D histogram to be equalized, whereas the second instance uses the RG, RB, and GB 2D histograms. Histogram equalization is performed using shift hue-preserving transformations, avoiding the appearance of unrealistic colors. Our methods have linear time and space complexities with respect to the image dimension and do not require conversions between color spaces to perform contrast enhancement. Objective assessments comparing our methods and others are performed using a contrast measure and color image quality measures, where quality is established as a weighted function of the naturalness and colorfulness indexes. This is the first work to evaluate histogram equalization methods on a well-known database of 300 images (a dataset from the University of California, Berkeley) using measures such as naturalness and colorfulness. Experimental results show that the image contrast produced by our methods is on average 50% greater than that of the original image, while keeping the quality of the output images close to the original.
Styles: ABNT, Harvard, Vancouver, APA, etc.
5

Gaddam, Purna Chandra Srinivas Kumar, and Prathik Sunkara. "Advanced Image Processing Using Histogram Equalization and Android Application Implementation". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13735.

Full text of the source
Abstract:
Nowadays, the conditions under which an image is taken can lead to near-zero visibility for the human eye, usually because of a lack of clarity caused by atmospheric effects such as haze, fog and other daylight effects. Useful information captured under such scenarios should be enhanced and made clear enough to recognize the objects and other details. Many image processing algorithms have been implemented to deal with issues caused by low light or by haze affecting the imaging devices; these algorithms also provide non-linear contrast enhancement to some extent. We took existing algorithms such as SMQT (Successive Mean Quantization Transform), the V transform, and histogram equalization to improve the visual quality of digital pictures with large-range scenes and irregular lighting conditions. These algorithms were applied in two different ways and tested on different images affected by low light and color change, and succeeded in producing enhanced images; they help with several enhancements, such as color and contrast, and give very accurate results on low-light images. The histogram equalization technique is implemented by interpreting the histogram of the image as a probability density function. The cumulative distribution function is applied to the image to obtain accumulated histogram values, and the pixel values are then changed based on their probability and spread over the histogram. From these algorithms we chose histogram equalization; MATLAB code was taken as a reference and modified to implement an API (Application Program Interface) in Java, and we confirmed that the application works properly with reduced execution time.
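
The PDF/CDF procedure described in this abstract is the textbook global histogram equalization routine. A minimal NumPy sketch of it (function name is illustrative, assuming an 8-bit grayscale image rather than the thesis's MATLAB or Java code) might look like this:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global HE: interpret the histogram as a PDF, accumulate it into a
    CDF, and remap every pixel through the resulting lookup table."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    pdf = hist / img.size                        # histogram as a PDF
    cdf = np.cumsum(pdf)                         # accumulated histogram values
    lut = np.round(255 * cdf).astype(np.uint8)   # spread levels over [0, 255]
    return lut[img]                              # remap pixels by probability
```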
Styles: ABNT, Harvard, Vancouver, APA, etc.
6

Skosan, Marshalleno. "Histogram equalization for robust text-independent speaker verification in telephone environments". Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/5103.

Full text of the source
Styles: ABNT, Harvard, Vancouver, APA, etc.
7

Gatti, Pruthvi Venkatesh, and Krishna Teja Velugubantla. "Contrast Enhancement of Colour Images using Transform Based Gamma Correction and Histogram Equalization". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14424.

Full text of the source
Abstract:
Contrast is an important factor in any subjective evaluation of image quality. It is the difference in visual properties that makes an object distinguishable from other objects and from the background. Contrast enhancement is mainly used to increase the contrast of an image by means of its histogram. A histogram is a graphical representation of the distribution of numerical data in an image. Histogram equalization is widely used in image processing to adjust image contrast using histograms, whereas gamma correction is often used to adjust luminance. By combining histogram equalization and gamma correction we propose a hybrid method that modifies histograms and enhances the contrast of an image digitally. Our proposed method deals with variants of histogram equalization and transform-based gamma correction. It is an automatic transformation technique that improves the contrast of dimmed images via gamma correction and the probability distribution of luminance pixels. The proposed method has been turned into an Android application. We succeeded in enhancing the contrast of images using our method and tested it for different alpha values; graphs of the gamma for different alpha values are plotted.
Styles: ABNT, Harvard, Vancouver, APA, etc.
8

Mallampati, Vivek. "Image Enhancement & Automatic Detection of Exudates in Diabetic Retinopathy". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18109.

Full text of the source
Abstract:
Diabetic retinopathy (DR) is becoming a global health concern and causes loss of vision in most patients with the disease. Given its vast prevalence, automated detection of DR is needed for quick diagnosis, where the progress of the disease is monitored by detecting changes in exudates and classifying them in fundus retina images. In today's automated diagnosis systems, several image enhancement methods are applied to the original fundus images. The primary goal of this thesis is to compare three popular enhancement methods: Mahalanobis Distance (MD), Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE). By quantifying the comparison in terms of the ability to detect and classify exudates, the best of the three enhancement methods is implemented to detect and classify soft and hard exudates. A graphical user interface is also provided, built with MATLAB. The results showed that the MD enhancement method yielded better results in enhancing the digital images than HE and CLAHE, and it enabled this study to successfully classify exudates into hard and soft classes. Overall, the research concluded that the suggested method yielded the best results regarding the detection of exudates; its classification and management can be suggested to doctors and ophthalmologists.
Styles: ABNT, Harvard, Vancouver, APA, etc.
9

Naram, Hari Prasad. "Classification of Dense Masses in Mammograms". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1528.

Full text of the source
Abstract:
This dissertation details techniques developed to aid in the classification of tumors, non-tumors, and dense masses in a mammogram. Certain characteristics, such as texture in the mammographic image, are used to identify the regions of interest as part of the classification, and pattern recognition techniques such as the nearest mean classifier and the support vector machine classifier are used to classify the features. The initial stages process the mammographic image to extract the relevant features necessary for classification, and in the final stage the features are classified using the pattern recognition techniques mentioned above. The goal of this research is to provide medical experts and researchers with an effective method to aid them in identifying tumors, non-tumors, and dense masses in a mammogram. First, the breast region is extracted from the entire mammogram by creating masks and using them to extract the region of interest pertaining to the tumor. A chain code is employed to extract the various regions, which could potentially be classified as tumors, non-tumors, or dense regions. Adaptive histogram equalization is employed to enhance the contrast of the image; applying it several times produces a saturated image containing only the bright spots of the mammographic image, which appear as dense regions of the mammogram. These dense masses could be potential tumors requiring treatment. Relevant characteristics such as texture are used for feature extraction with the nearest mean and support vector machine classifiers; a total of thirteen Haralick features are used to classify the three classes. The support vector machine classifier is used for two-class problems, and a radial basis function (RBF) kernel is used to find the best possible (C, gamma) values. The results suggest that the best classification accuracy was achieved with support vector machines for both tumor vs. non-tumor and tumor vs. dense masses: above 90% for tumor vs. non-tumor and 70.8% for dense masses, using 11 features. Support vector machines performed better than the nearest mean classifier in classifying the classes. Various case studies were performed on two distinct datasets, each consisting of data from 24 patients in two views per patient, the cranio-caudal and medio-lateral oblique views, from which the regions of interest, possibly a tumor, non-tumor, or dense region (mass), were extracted.
Styles: ABNT, Harvard, Vancouver, APA, etc.
10

Jomaa, Diala. "Fingerprint Segmentation". Thesis, Högskolan Dalarna, Datateknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4264.

Full text of the source
Abstract:
In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint from the image under consideration. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image, and a post-processing step counters undesirable effects in the segmented image. Fingerprint recognition is one of the oldest biometric recognition systems: everyone has a unique and unchangeable fingerprint, and based on this uniqueness and distinctness, fingerprint identification has long been used in many applications. A fingerprint image is a pattern consisting of two regions, foreground and background. The foreground contains all the important information needed by automatic fingerprint recognition systems, while the background is a noisy region that contributes to the extraction of false minutiae. To avoid extracting false minutiae, several steps should be followed, such as preprocessing and enhancement. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image, called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background; due to the nature of fingerprint images, this is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, as demonstrated in diverse experiments.
Styles: ABNT, Harvard, Vancouver, APA, etc.
11

Pehrson, Skidén Ottar. "Automatic Exposure Correction And Local Contrast Setting For Diagnostic Viewing of Medical X-ray Images". Thesis, Linköping University, Department of Biomedical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56630.

Full text of the source
Abstract:

To properly display digital X-ray images for visual diagnosis, a proper display range needs to be identified. This can be difficult when the image contains collimators or large background areas which can dominate the histograms. Also, when there are both underexposed and overexposed areas in the image it is difficult to display these properly at the same time. The purpose of this thesis is to find a way to solve these problems. A few different approaches are evaluated to find their strengths and weaknesses. Based on Local Histogram Equalization, a new method is developed to put various constraints on the mapping. These include alternative ways to perform the histogram calculations and how to define the local histograms. The new method also includes collimator detection and background suppression to keep irrelevant parts of the image out of the calculations. Results show that the new method enables proper display of both underexposed and overexposed areas in the image simultaneously while maintaining the natural look of the image. More testing is required to find appropriate parameters for various image types.

Styles: ABNT, Harvard, Vancouver, APA, etc.
12

Saikaley, Andrew Grey. "Imaging, characterization and processing with axicon derivatives". Thesis, Laurentian University of Sudbury, 2013. https://zone.biblio.laurentian.ca/dspace/handle/10219/2039.

Full text of the source
Abstract:
Axicons have been proposed for imaging applications since they offer the advantage of extended depth of field (DOF). This enhanced DOF comes at the cost of degraded image quality. Image processing has been proposed to improve the image quality. Initial efforts were focused on the use of an axicon in a borescope thereby extending depth of focus and eliminating the need for a focusing mechanism. Though promising, it is clear that image processing would lead to improved image quality. This would also eliminate the need, in certain applications, for a fiber optic imaging bundle as many modern day video borescopes use an imaging sensor coupled directly to the front end optics. In the present work, three types of refractive axicons are examined: a linear axicon, a logarithmic axicon and a Fresnel axicon. The linear axicon offers the advantage of simplicity and a significant amount of scientific literature including the application of image restoration techniques. The Fresnel axicon has the advantage of compactness and potential low cost of production. As no physical prior examples of the Fresnel axicons were available for experimentation until recently, very little literature exists. The logarithmic axicon has the advantage of nearly constant longitudinal intensity distribution and an aspheric design producing superior pre-processed images over the aforementioned elements. Point Spread Functions (PSFs) for each of these axicons have been measured. These PSFs form the basis for the design of digital image restoration filters. The performance of these three optical elements and a number of restoration techniques are demonstrated and compared.
Styles: ABNT, Harvard, Vancouver, APA, etc.
13

Engelhardt, Erik, and Simon Jäger. "An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260090.

Full text of the source
Abstract:
In this study, the impact of multiple image preprocessing methods on Convolutional Neural Networks (CNNs) was studied. Metrics such as accuracy, precision, recall and F1-score (Hossin et al. 2011) were evaluated. Specifically, the study is geared towards malaria classification using the data set made available by the U.S. National Library of Medicine (Malaria Datasets n.d.). This data set contains images of thin blood smears, where uninfected and parasitized blood cells have been segmented. In the study, 3 CNN models were proposed for the parasitization classification task. Each model was trained on the original data set and on 4 preprocessed data sets. The preprocessing methods used to create the 4 data sets were grayscale conversion, normalization, histogram equalization and contrast limited adaptive histogram equalization (CLAHE). CLAHE preprocessing yielded a 1.46% (model 1) and 0.61% (model 2) improvement over the original data set in terms of F1-score; one model (model 3) provided inconclusive results. The results show that CNNs can be used for parasitization classification, but the impact of preprocessing is limited.
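
For reference, the CLAHE preprocessing step named in this abstract is available off the shelf in OpenCV. A small usage sketch (file names and parameter values are illustrative, not taken from the study) could be:

```python
import cv2

# Load a thin-blood-smear image in grayscale (hypothetical file name).
img = cv2.imread("blood_smear.png", cv2.IMREAD_GRAYSCALE)

# Contrast limited adaptive HE: equalize per tile while clipping the
# histogram to limit noise amplification; 2.0 / (8, 8) are common defaults.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
cv2.imwrite("blood_smear_clahe.png", enhanced)
```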
Styles: ABNT, Harvard, Vancouver, APA, etc.
14

Gajjela, Venkata Sarath, and Surya Deepthi Dupati. "Mobile Application Development with Image Applications Using Xamarin". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15838.

Full text of the source
Abstract:
Image enhancement improves the appearance of an image by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Image enhancement techniques have been widely used in many image processing applications where the subjective quality of images is important for human interpretation. In many cases images lack clarity and suffer from fog, low light and other daylight effects, so such images should be enhanced to make the objects clearly recognizable. Histogram-based image enhancement is mainly based on equalizing the histogram of the image and increasing its corresponding dynamic range. The histogram equalization algorithm was implemented and tested on different images affected by low light, fog and colour contrast, and succeeded in producing enhanced images; the technique works by normalizing the histogram values into a probability density function. We initially worked with MATLAB code for histogram equalization and then adapted it to implement an Application Program Interface (API) using the Xamarin software. The mobile application developed using Xamarin works efficiently and has a shorter execution time than the corresponding application developed in Android Studio. The application was successfully debugged in both its Android and iOS versions. The focus of this thesis is to develop a mobile application for enhancing low-light and foggy images using Xamarin.
Styles: ABNT, Harvard, Vancouver, APA, etc.
15

Thai, Ba chien. "Tone Mapping Operators for High Dynamic Range Images". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD082.

Full text of the source
Abstract:
The conversion of a High Dynamic Range (HDR) image into a Low Dynamic Range (LDR) image is investigated so that the visual rendering of the latter is of good quality. The first contribution focuses on the contrast enhancement of the tone-mapped image, using a piecewise linear function as a non-uniform histogram equalization adjustment to model the "s-shaped" curve of human visual adaptation. The second and third contributions are concerned with preserving the details of the HDR image in the tone-mapped image: separable and non-separable multiresolution approaches based on essentially non-oscillatory strategies, which take the singularities of the HDR image into account in the derivation of the mathematical model, are proposed. The fourth contribution not only preserves details but also enhances the contrast of the tone-mapped HDR image. A separable "near optimal" lifting scheme using an adaptive prediction step is proposed; the latter relies on a linear weighted combination of neighbouring coefficients to extract the relevant finest details of the HDR image at each resolution level, after which a piecewise linear mapping is applied to the coarse reconstruction. Simulation results show good performance, both in terms of visual quality and of the Tone Mapped Quality Index (TMQI) metric, compared to existing competitive tone mapping approaches. The impact of the TMQI parameters on the visual quality of the tone-mapped images is discussed; the proposed parameters show a strong correlation between the modified metric and the mean opinion score.
Styles: ABNT, Harvard, Vancouver, APA, etc.
16

Мойсей, Павло Ігорович, and Pavlo Moisei. "Метод обробки зображень для верифікації особи в телекомунікаційних системах". Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33278.

Full text of the source
Abstract:
The diploma thesis is devoted to substantiating a method of image processing for identity verification in telecommunication systems, using the Laplace filter and equalization of the image histogram. Face recognition methods are analysed, structural schemes of face verification systems in telecommunication networks for identifying a person are presented, and an image processing method is substantiated that makes it possible to increase the reliability and speed of verification systems.
Styles: ABNT, Harvard, Vancouver, APA, etc.
17

Martišek, Karel. "Adaptivní filtry pro 2-D a 3-D zpracování digitálních obrazů". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-234015.

Full text of the source
Abstract:
The thesis deals with adaptive filters for the visualization of high-resolution images. The theoretical part describes the working principle of the confocal microscope and introduces the notion of a digital image in a mathematically rigorous way. Image processing is approached both in the frequency domain (using the 2-D and 3-D discrete Fourier transforms and frequency filters) and by means of digital geometry (using adaptive histogram equalization with an adaptive neighbourhood). The modifications needed to handle non-ideal images containing additive and impulse noise are also described. The final part of the thesis deals with the spatial reconstruction of objects from their optical sections. All procedures and algorithms are implemented in software developed as part of this thesis.
Styles: ABNT, Harvard, Vancouver, APA, etc.
18

Martišek, Karel. "Adaptive Filters for 2-D and 3-D Digital Images Processing". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-234150.

Full text of the source
Abstract:
The thesis deals with adaptive filters for the visualization of high-resolution images. The theoretical part describes the working principle of the confocal microscope and introduces the notion of a digital image in a mathematically rigorous way. Image processing is approached both in the frequency domain (using the 2-D and 3-D discrete Fourier transforms and frequency filters) and by means of digital geometry (using adaptive histogram equalization with an adaptive neighbourhood). The modifications needed to handle non-ideal images containing additive and impulse noise are also described. The final part of the thesis deals with the spatial reconstruction of objects from their optical sections. All procedures and algorithms are implemented in software developed as part of this thesis.
Styles: ABNT, Harvard, Vancouver, APA, etc.
19

Nguyen, Tan-Sy. "A smart system for processing and analyzing gastrointestinal abnormalities in wireless capsule endoscopy". Electronic Thesis or Diss., Paris 13, 2023. http://www.theses.fr/2023PA131052.

Full text of the source
Abstract:
In this thesis, we address the challenges associated with the identification and diagnosis of pathological lesions in the gastrointestinal (GI) tract. Analyzing the massive amount of visual information obtained by Wireless Capsule Endoscopy (WCE), an excellent tool for visualizing and examining the GI tract (including the small intestine), places a considerable burden on clinicians and increases the risk of misdiagnosis. To alleviate this issue, we develop an intelligent system capable of automatically detecting and identifying various GI disorders. However, the limited quality of the acquired images, due to distortions such as noise, blur, and uneven illumination, poses a significant obstacle; image pre-processing techniques therefore play a crucial role in improving the quality of the captured frames, facilitating subsequent high-level tasks such as abnormality detection and classification. To tackle these quality limitations, novel learning-based algorithms are proposed. Recent advances in image restoration and enhancement rely on learning-based approaches that require pairs of distorted and reference images for training, but a significant challenge arises in WCE due to the absence of a dedicated dataset for evaluating image quality: to the best of our knowledge, no specialized dataset designed explicitly for evaluating video quality in WCE currently exists. In response to the need for an extensive video quality assessment dataset, we therefore first introduce the "Quality-Oriented Database for Video Capsule Endoscopy" (QVCED). Subsequently, our findings show that assessing distortion severity significantly improves the effectiveness of image enhancement, especially in the case of uneven illumination. To this end, we propose a novel metric dedicated to evaluating and quantifying uneven illumination in laparoscopic or WCE images, which extracts the image's background illuminance and takes into account the mapping effect of histogram equalization. Our metric outperforms some state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods and performs competitively with Full-Reference IQA (FR-IQA) methods. After the assessment step, we develop an image quality enhancement method that improves the overall quality of the images by leveraging a cross-attention algorithm, which connects the extracted distortion level with the degraded regions within the images. This algorithm allows us to precisely identify and target the specific areas affected by distortions, so that an enhancement tailored to each degraded region can be applied, effectively improving image quality. Following the improvement of image quality, visual features are extracted and fed into a classifier to provide a diagnosis through classification. A further challenge in the WCE domain is that a significant portion of the data remains unlabeled; to overcome it, we develop an efficient method based on a self-supervised learning (SSL) approach to improve classification performance. The proposed method, using attention-based SSL, successfully addresses the issue of limited labeled data commonly encountered in the existing literature.
Styles: ABNT, Harvard, Vancouver, APA, etc.
20

Honório, Tatiane Cruz de Souza. "Modelos de compressão de dados para classificação e segmentação de texturas". Universidade Federal da Paraíba, 2010. http://tede.biblioteca.ufpb.br:8080/handle/tede/6044.

Full text of the source
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This work analyzes methods for texture image classification and segmentation using models from lossless data compression algorithms. Two data compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), which had been applied to texture classification in previous works. The textures are pre-processed using histogram equalization. The classification method is divided into two stages. In the learning (training) stage, the compression algorithm builds statistical models for the horizontal and vertical structures of each class. In the classification stage, samples of the textures to be classified are compressed using the models built in the learning stage, sweeping the samples horizontally and vertically; a sample is assigned to the class that yields the highest average compression. The classifier tests were made using the Brodatz texture album, for various context sizes (in the PPM case), numbers of samples and training sets. For some combinations of these parameters, the classifiers achieved 100% correct classification. Texture segmentation was performed only with PPM. Initially, the horizontal models are created using eight texture samples of size 32 x 32 pixels for each class, with a PPM context of maximum size 1. The images to be segmented are compressed by the class models, initially in blocks of size 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of size 32 x 32. The process is repeated until some model reaches a compression ratio in the range defined for the block size in question; if the block reaches size 4 x 4, it is classified as belonging to the class of the model that achieved the highest compression ratio.
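
The classify-by-best-compression idea in this abstract can be illustrated compactly. The toy sketch below uses zlib as a stand-in compressor (the thesis trains true PPM and LZW models and sweeps samples horizontally and vertically, which is omitted here); a sample is assigned to the class whose training data compresses it best:

```python
import zlib

def extra_bytes(class_data: bytes, sample: bytes) -> int:
    # Approximate "compressing the sample with a model trained on the class":
    # the size increase when the sample is appended to the class data.
    with_sample = len(zlib.compress(class_data + sample, 9))
    alone = len(zlib.compress(class_data, 9))
    return with_sample - alone

def classify(sample: bytes, training: dict) -> str:
    # Assign the sample to the class whose model compresses it the most,
    # i.e. the one needing the fewest extra bytes to encode it.
    return min(training, key=lambda c: extra_bytes(training[c], sample))
```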
Styles: ABNT, Harvard, Vancouver, APA, etc.
21

Wang, Chu-Hsuan, and 王楚軒. "Robust indoor localization using histogram equalization". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5vkxfq.

Full text of the source
Abstract:
Doctoral dissertation
Yuan Ze University
Department of Electrical Engineering
104 (ROC calendar)
Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. Received Signal Strength (RSS), mostly used in Wi-Fi fingerprinting systems, is known to be unreliable due to environmental and hardware effects. The PHY-layer information about channel quality known as Channel State Information (CSI) can be used instead, thanks to its frequency diversity (OFDM sub-carriers) and spatial diversity (multiple antennas), although the higher dimensionality of CSI can cause over-fitting, which must be taken into account. This dissertation proposes two approaches, based on histogram equalization (HEQ) and information-theoretic learning (ITL), to compensate for hardware variation, orientation mismatch and over-fitting in a robust localization system. The proposed method involves converting the temporal-spatial radio signal strength into a reference function (i.e., equalizing the histogram). This work makes two principal contributions: first, the equalized RF signal improves the robustness of location estimation; second, ITL's more discriminative components provide increased flexibility in determining the number of required components and achieve better computational efficiency.
Styles: ABNT, Harvard, Vancouver, APA, etc.
22

Bhubaneswari, M. "Optimized Histogram Equalization for Image Enhancement". Thesis, 2015. http://ethesis.nitrkl.ac.in/6802/1/Optimized_Bhubaneswari_2015.pdf.

Full text of the source
Abstract:
In this project, image enhancement has been achieved by performing histogram equalization with parameters tuned by optimization algorithms. Histogram equalization is a spatial-domain image enhancement technique that effectively enhances the contrast of an image; however, while it takes care of contrast enhancement, it does not consider abrupt changes in image brightness, so brightness is not preserved. Hence, this project proposes a modified histogram equalization technique using optimization algorithms that enhances contrast while ensuring brightness preservation. The idea is first to split the histogram of the input image into two using Otsu's threshold, then to form a set of optimized weighting constraints and apply them to both sub-images. The sub-images are equalized independently, and their union produces the contrast-enhanced, brightness-preserved output image. Three optimization algorithms are used to find the optimal constraints: first a Genetic Algorithm (GA), second Particle Swarm Optimization (PSO), and third a hybrid PSO algorithm. The results produced by these algorithms are then compared on various parameters, such as discrete entropy, mean, and number of generations, to find out which one outperforms the others.
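
Stripped of the optimized weighting constraints that are the project's actual contribution, the underlying split-then-equalize step reads roughly as below (a sketch assuming an 8-bit grayscale NumPy image, with OpenCV supplying Otsu's threshold):

```python
import cv2
import numpy as np

def otsu_split_equalize(img: np.ndarray) -> np.ndarray:
    """Split the histogram at Otsu's threshold and equalize each sub-image
    independently within its own gray range, so the split point (and with
    it much of the original brightness) is preserved."""
    t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    t = int(t)
    out = img.copy()
    for lo, hi, mask in ((0, t, img <= t), (t + 1, 255, img > t)):
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / hist.sum()
        lut = np.round(lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[img[mask].astype(int) - lo]
    return out
```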
Styles: ABNT, Harvard, Vancouver, APA, etc.
23

Chung, Xin-fang, and 鍾欣芳. "Simulation of Histogram Equalization for Classification Problem". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/6xk7u8.

Full text of the source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
99 (ROC calendar)
Histogram equalization (HEQ) is a technique for improving the darkness and brightness of an image by adjusting the gray levels based on the cumulative distribution function (CDF). In recent years, this method has been applied to other problems, including robust speech recognition, to reduce the mismatch between noisy and clean speech, and natural language processing, for the cross-database problem. This thesis analyzes by simulation how histogram equalization influences a simple classification problem. The results show that a rough CDF curve caused by insufficient data leads to a poor mapping between training and test data and degrades performance. Direct and indirect applications of histogram equalization achieve similar performance for linear or non-linear transformations, while the performance of the indirect one is more sensitive to the type of classifier. With a sufficient amount of training data, HEQ and mean-standard deviation weighting (MSW) achieve comparable performance for linear transformations, while HEQ appears superior for non-linear transformations.
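
For a concrete picture of the mechanism being simulated, here is a minimal sketch (names illustrative, not from the thesis) of applying HEQ to a one-dimensional feature: each test value is pushed through the empirical CDF of the test set and then through the inverse CDF of the training set, so the two sets end up with matching distributions:

```python
import numpy as np

def heq_map(test: np.ndarray, train: np.ndarray) -> np.ndarray:
    # Empirical CDF value of each test point within its own sample;
    # with few points this curve is rough, which is exactly the
    # insufficient-data degradation the simulations observe.
    ranks = np.searchsorted(np.sort(test), test, side="right") / len(test)
    # Inverse CDF (quantile function) of the training distribution.
    return np.quantile(train, ranks)
```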
Styles: ABNT, Harvard, Vancouver, APA, etc.
24

Jhan, Shih-Sian, and 詹士賢. "Sobel Histogram Equalization for Image Contrast Enhancement". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76534008994939627485.

Full text of the source
Abstract:
Master's thesis
Leader University (立德管理學院)
Graduate Institute of Applied Informatics
95 (ROC calendar)
Contrast enhancement is an important technique in image processing. Although many contrast enhancement methods have been proposed, they do not focus on the edge quality of the image. In this study, Sobel histogram equalization (SHE) is proposed to enhance image contrast. In SHE, the image is divided into two regions, edge and non-edge, using the Sobel edge detector; the contrast of these two regions is enhanced individually, and the regions are then merged back into a whole image by histogram equalization. In our experiments, SHE outperforms the other methods.
Styles: ABNT, Harvard, Vancouver, APA, etc.
25

Chuang, Chialung, and 莊佳龍. "Piece-Wise Histogram Equalization For Image Enhancement". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/03838447444191654772.

Full text of the source
Abstract:
Master's thesis
I-Shou University
In-service Master's Program, Department of Information Engineering
100 (ROC calendar)
Histogram equalization (HE), which has been intensively studied for decades, is one of the most popular contrast enhancement technologies because it produces good results without complex parameters. It is widely used in a variety of image applications, for instance radar signal processing and medical image processing. However, HE suffers from the choice of a proper dynamic range, which can over-enhance images and cause poor visual quality. Common HE methods use a piece-wise algorithm that decomposes the input image into N sub-images, enhances the sub-images individually, and combines the enhanced sub-images into the result image; existing piece-wise algorithms, however, do not guarantee successful enhancement. In this thesis, we propose a novel piece-wise algorithm that uses a "unilateralism" method to enhance image details without losing the original brightness of the source image. Results indicate that the proposed method provides efficient enhancement. Furthermore, the method is extended to enhance color images; simulation results are demonstrated and discussed.
Styles: ABNT, Harvard, Vancouver, APA, etc.
26

Lin, Yi-Shan, and 林怡珊. "Partitioned Dynamic Range Histogram and Its Application to Obtain Better Histogram Equalization". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37273415144723906115.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
101 (ROC calendar)
Image contrast enhancement algorithms are designed to adjust contrast in line with human visual perception. Histogram equalization (HE) is a widely used and popular technique for image contrast enhancement; however, it may produce over-enhancement, a washed-out appearance, and detail loss in some parts of the processed image, making the result look unnatural. This thesis proposes a novel compensatory histogram equalization method. In standard HE, intensities are mapped by calculating the cumulative distribution function (CDF) derived from the probability density function (PDF). The proposed technique modifies the PDF of an image using a range distribution function (RDF), defined in this thesis as a constraint applied prior to HE, so that the enhancement is performed without fatal loss of detail. By remapping intensity levels, this approach provides a convenient and effective way to control the enhancement process. The proposed method can be applied to both high dynamic range (HDR) and low dynamic range (LDR) images; to accommodate more image storage technologies, it combines a simple preprocessing step for HDR images, so the method can be used on a wide range of image formats. Experimental results show that the proposed method achieves better results in terms of Information Fidelity Criterion (IFC) values, an image quality measure, than some previous modified histogram-based equalization methods. Further, a fusion algorithm is adopted to combine images processed with different parameters into an optimal result; we believe this is a strategy worthy of further exploration.
Styles: ABNT, Harvard, Vancouver, APA, etc.
27

Kumar, Pankaj. "Image Enhancement Using Histogram Equalization and Histogram Specification on Different Color Spaces". Thesis, 2014. http://ethesis.nitrkl.ac.in/5490/1/pankaj_arora_thesis.pdf.

Full text of the source
Abstract:
Image enhancement is one of the important requirements in digital image processing: it makes an image useful for various applications, such as digital photography, medicine, geographic information systems, industrial inspection, law enforcement and many other digital image applications, and it is used to improve the quality of poor images. The focus of this work is an attempt to improve the quality of digital images using histogram equalization and histogram specification. We apply histogram equalization to color images in different color spaces, such as RGB, HSV and YIQ, and histogram specification to gray-scale and color images.
Styles: ABNT, Harvard, Vancouver, APA, etc.
28

Chou, Ching-Yao, and 周敬堯. "Medical Image Enhancement Using Modified Color Histogram Equalization". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/bdpgee.

Full text of the source
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Healthcare Information Management
103 (ROC calendar)
Image enhancement is a crucial application in medical imaging. Histogram equalization is one of the techniques employed to enhance image contrast; it has become a vital part of general and medical image processing and has been widely studied and applied. However, traditional histogram equalization achieves poor enhancement results because it does not consider hue preservation. This study proposes a novel image enhancement method that incorporates hue preservation to address the problem of unpreserved hue in traditional approaches. In addition, the Gabor filter is used to enhance image details. The results indicate that both methods achieve satisfactory results. Finally, the proposed methods are applied to retinal and prostate cancer images, where they can effectively assist physicians in making professional judgments.
Styles: ABNT, Harvard, Vancouver, APA, etc.
29

Yu, Chieh-chun, and 余杰群. "Speed-Up Parametric-Oriented and Contrast Limited Histogram Equalization". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6964bw.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 102 (2013/14)
In this thesis, two regional contrast enhancement schemes are proposed. The first, termed adaptive parametric-oriented histogram equalization (APOHE), effectively generates artifact-free regionally contrast-enhanced images. First, the grayscale histogram of a specific region is modeled with multiple Gaussian distributions adjusted by two user-defined parameters (α, β) to yield good contrast. To improve processing efficiency, the required mean and variance of these Gaussian distributions are rapidly derived through the concept of the integral image. In addition, an adaptively corrected POHE (AcPOHE) is proposed to further improve the contrast with a limited trade-off in computation. Experimental results demonstrate the practical value of the proposed method for applications such as pattern recognition, biometric analysis, and surveillance systems. Compared with former speed-oriented methods, good contrast and artifact-free results are achieved simultaneously. Although regional contrast enhancement methods obtain richer details as expected, the noise in the image is amplified as well, particularly in homogeneous regions. Contrast limited adaptive histogram equalization (CLAHE) addresses this issue by restricting the slope of the AHE transformation function to reduce noise, yet its massive computational complexity is a major deficiency. To cope with this, a method termed integral CLAHE (ICLAHE) is proposed, exploiting the integral image and a property of PDF clipping to reduce the computation from the original O(M^2×P^2) to O((L+1)×P^2) for images of size P×P and contextual regions of size M×M. Compared with state-of-the-art regional contrast enhancement methods, the proposed method is not merely the simplest, providing fewer halo effects and less noise, but also offers richer, more distinguishable textural details. As a result, great potential of the proposed method for medical imaging is demonstrated.
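The APOHE parameterization is not reproduced here; the integral-image trick the abstract relies on, obtaining each window's mean and variance in O(1) from two summed-area tables, can be sketched as follows (assuming a grayscale image and an odd window size m):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row and column."""
    s = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(s, ((1, 0), (1, 0)), mode='constant')

def local_stats(image, m):
    """Mean and variance over m x m windows (m odd) in O(1) per pixel,
    via summed-area tables of the intensities and their squares."""
    img = image.astype(np.float64)
    pad = np.pad(img, m // 2, mode='edge')
    s1, s2 = integral_image(pad), integral_image(pad ** 2)
    h, w = image.shape

    def window_sum(s):  # four-corner rule of a summed-area table
        return (s[m:m + h, m:m + w] - s[:h, m:m + w]
                - s[m:m + h, :w] + s[:h, :w])

    mean = window_sum(s1) / (m * m)
    var = window_sum(s2) / (m * m) - mean ** 2
    return mean, var
```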
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Syue, Jin-Yu, e 薛晉宇. "An Efficient Fusion-Based Contrast Limited Histogram Equalization Defogging". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/n32w75.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 104 (2015/16)
Image quality degradation is often introduced when capturing images in poor weather conditions such as fog or haze. To overcome this problem, conventional approaches focus mainly on enhancing the overall image contrast. However, because of the unspecified light-source distribution or unsuitable mathematical constraints in the cost functions, quality results are often difficult to achieve. In this thesis, a fusion-based transmission estimation method is introduced to adaptively combine two different transmission models. Specifically, the new fusion weighting scheme, together with the atmospheric light computed by a Gaussian-based dark channel method, improves the estimation of the light-source locations. To reduce the flickering introduced during frame-based dehazing, a flicker-free module is formulated to alleviate its impact. System assessments show that this approach achieves superior defogging and dehazing performance compared to state-of-the-art methods, both quantitatively and qualitatively. However, due to the inherent constraints of optical-model-based defogging, local image details are usually sacrificed, which degrades practicability. This thesis therefore also proposes a second solution: the traditional contrast limited adaptive histogram equalization (CLAHE) is exploited with reduced computational complexity and combined with the optical-model-based defogging method to enhance image detail while preserving color fidelity. To address the over-brightness and low-contrast issues caused by an unsuitable block size, an adaptive refinement module based on two brightness channels is also proposed. Quantitative and qualitative assessments show that the proposed approach achieves superior defogging performance and effectively maintains image naturalness compared to state-of-the-art methods, making it a strong candidate for various applications.
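The fusion weights and the Gaussian variant are specific to the thesis; the underlying dark-channel quantities they build on are standard and can be sketched as follows (the patch size and top fraction below are illustrative defaults):

```python
import numpy as np
import cv2

def dark_channel(image_bgr, patch=15):
    """Dark channel prior: per-pixel minimum over the color channels,
    then a minimum filter (erosion) over a local patch."""
    min_rgb = image_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def atmospheric_light(image_bgr, dark, top=0.001):
    """Estimate atmospheric light from the brightest fraction of the
    dark channel's pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argpartition(dark.flatten(), -n)[-n:]
    return image_bgr.reshape(-1, 3)[idx].max(axis=0)
```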
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Lin, Jia-Hwa, e 林佳華. "Edge Preserving for Contrast Enhancement Based on Histogram Equalization". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/37428206236087680201.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 98 (2009/10)
This thesis provides a new method that preserves edges while enhancing the contrast of color images. It is based on an edge-based histogram, which differs from traditional histogram-based equalization. Our method not only achieves good contrast enhancement but also avoids artifacts. First, we use the Sobel operator to detect edges and use them to update the histogram. Based on the updated histogram, a new transformation function is generated. Experimental results show that the proposed method yields better image quality than previous methods.
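The abstract does not specify how the edges update the histogram; one plausible reading weights each pixel's histogram contribution by its Sobel gradient magnitude before equalizing (a hypothetical sketch, not the thesis's exact rule):

```python
import numpy as np
import cv2

def edge_weighted_equalize(gray):
    """Build a gradient-weighted histogram so edge pixels dominate the
    transformation function, then equalize with its CDF."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    weight = np.hypot(gx, gy) + 1.0      # +1 keeps flat regions represented
    hist = np.bincount(gray.flatten(), weights=weight.flatten(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[gray]
```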
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Hsieh, Wen-lung, e 謝文龍. "Study of global contrast enhancement by adaptive histogram equalization". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/64296881409979898418.

Texto completo da fonte
Resumo:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 98 (2009/10)
There are two approaches to forming an HDR image: one fuses several shots of the same scene taken at different exposures into a single image that recaptures the visual details; the other expands the contrast of a single image and then compresses it into a high-dynamic-contrast result. Given only a single high-dynamic-range image, how can a low-contrast display faithfully reproduce the beautiful natural scene? In general there are two methods: a simple contrast change gives quick results but may lose bright-region or shadow detail; alternatively, Gaussian-filter-based layering can brighten dark areas so that their details appear, but it is slow and tends to give the image a flat, painted look. This thesis proposes a method that expands and compresses image contrast: in the RGB color model, dividing the image by a controlling coefficient changes the global brightness, supplemented by adaptive histogram equalization to improve both LDR and HDR images. LDR images are divided into darker images and general images; we select a contrast amplification coefficient for general images and another set of coefficients so that dark images become visually acceptable rather than having to be discarded. HDR images are divided into three categories, and coefficients and rules are likewise selected for each so that the processed images are easy to view. The proposed methodology is straightforward; in the experiments, compared with some traditional enhancement methods, the proposed method generally obtains fine detail in both LDR and HDR images, with good contrast and a consistent visual feel.
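In outline, the pipeline divides the image by a category-dependent coefficient to shift global brightness and then applies adaptive histogram equalization; a minimal sketch (the coefficient value and CLAHE parameters here are illustrative, not the thesis's tuned rules):

```python
import numpy as np
import cv2

def scale_then_ahe(gray, coeff=1.5):
    """Globally rescale brightness by a chosen coefficient, then apply
    adaptive histogram equalization (OpenCV's CLAHE) for local contrast."""
    scaled = np.clip(gray.astype(np.float32) / coeff, 0, 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(scaled)
```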
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Lo, Yi-Chung, e 羅一中. "Low Cost FPGA Circuit Design of Modified Histogram Equalization". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/h8k752.

Texto completo da fonte
Resumo:
Master's thesis
National Taipei University of Technology
Department of Electrical Engineering
Academic year 93 (2004/05)
For real-time moving-picture contrast enhancement, existing methods usually require one or more frame buffers to store intermediate output, which is expensive to implement in a practical hardware system. This thesis therefore proposes a modified histogram equalization (MHE) algorithm combined with a backward frame translation table to eliminate the need for frame buffers. Conventional histogram equalization is widely adopted for contrast enhancement because it generates the transformation curve automatically, transforming the image through the cumulative distribution function of the histogram. However, the contrast may be over-enhanced when the auto-generated curve is too steep due to high peaks in the histogram; in this case, conventional histogram equalization can give the output image a harsh, noisy appearance. The proposed MHE algorithm uses the mean and standard deviation to pick out high peaks and modifies the histogram before building the translation table; the table is then smoothed over contiguous values with a low-pass filter. This algorithm not only avoids over-enhancement and increases the dynamic range of the gray levels, but also maps low-spatial-frequency areas smoothly. The algorithm was successfully implemented on an FPGA platform to demonstrate its effectiveness.
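The FPGA details aside, a software view of the translation-table construction can be sketched as follows (the mean-plus-one-standard-deviation ceiling and the 3-tap smoother are illustrative choices, not taken from the thesis):

```python
import numpy as np

def modified_he_table(gray):
    """Clip histogram peaks above mean + std of the bin counts, equalize,
    then low-pass filter the resulting translation table."""
    hist = np.bincount(gray.flatten(), minlength=256).astype(np.float64)
    clipped = np.minimum(hist, hist.mean() + hist.std())  # suppress peaks
    cdf = np.cumsum(clipped) / clipped.sum()
    table = np.round(255 * cdf)
    # 3-tap moving average over contiguous gray levels (edge-padded).
    table = np.convolve(np.pad(table, 1, mode='edge'), np.ones(3) / 3, 'valid')
    return np.clip(np.round(table), 0, 255).astype(np.uint8)
```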
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Chan, Ai-Ju, e 詹璦如. "Automatic Equal-Separated Histogram Equalization for High-Quality Contrast Enhancement". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/4c5mx6.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 99 (2010/11)
Histogram equalization is an effective technique for contrast enhancement. However, the traditional histogram equalization (HE) method often over-enhances, causing an unnatural look and visual artifacts in the processed image. In this thesis, we propose a novel histogram equalization method based on automatic histogram separation together with a piecewise transformation function. Five methods (HE, BBHE, DSIHE, RSIHE, and the proposed method) are implemented in C for comparison. We first perform qualitative and quantitative evaluations to show that our approach is effective; the power consumption is then estimated using the Wattch toolset. Experimental results show that the proposed Automatic Equal-Separated Histogram Equalization (AESHE) not only keeps the shape features of the original histogram but also enhances the contrast effectively, even though its processing time and power consumption are slightly higher than those of the other methods.
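AESHE's automatic separation rule is not detailed in the abstract; the simplest member of this family, BBHE, splits at the mean and equalizes each sub-histogram within its own range, which a sketch makes concrete (BBHE shown for reference only):

```python
import numpy as np

def bi_histogram_equalize(gray):
    """BBHE-style bi-histogram equalization: split at the mean intensity
    and equalize each part within its own range, which tends to preserve
    mean brightness better than plain HE."""
    mean = int(gray.mean())
    out = np.empty_like(gray)
    for lo, hi, mask in ((0, mean, gray <= mean), (mean + 1, 255, gray > mean)):
        vals = gray[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals - lo, minlength=hi - lo + 1)
        cdf = np.cumsum(hist) / hist.sum()
        out[mask] = (lo + np.round((hi - lo) * cdf[vals - lo])).astype(gray.dtype)
    return out
```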
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Tsai, Ting-Chou, e 蔡定洲. "A Weight-Based Contrast Enhancement Algorithm by Clustered Histogram Equalization". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/58774256714086442447.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Chen, Shin-Anne, e 陳信安. "Exposure-based Weighted Dynamic Histogram Equalization for Image Contrast Enhancement". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8rpdc8.

Texto completo da fonte
Resumo:
Master's thesis
National Taipei University of Technology
Graduate Institute of Automation Technology
Academic year 103 (2014/15)
Global histogram equalization (GHE) [1] is a common method for improving image contrast. However, it tends to introduce unnecessary visual artifacts and cannot preserve overall brightness. To overcome these problems, many studies have been based on partitioned-histogram (i.e., sub-histogram) equalization: an input image is first divided into sub-images, the individual histograms of the sub-images are equalized independently, and the sub-images are finally integrated into one complete image. For example, exposure-based sub-image histogram equalization (ESIHE) [2] uses an exposure-related threshold to divide the original image into different intensity ranges (horizontal partitioning) and uses the mean brightness as a threshold to clip the histogram (vertical partitioning). This thesis proposes a novel method, called exposure-based weighted dynamic histogram equalization (EWDHE), which extends ESIHE and makes three major contributions. First, an Otsu-based approach and a clustering performance measure are integrated to determine the optimal number of sub-histograms and the separating points. Second, an exposure-related parameter automatically adapts the contrast limit, avoiding over-enhancement in parts of the image. Third, a new weighted scale factor resizes the sub-histograms, accounting for each sub-histogram's range and its number of pixels. Simulation results indicate that the proposed method outperforms state-of-the-art approaches in contrast enhancement, brightness preservation, and entropy preservation.
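EWDHE itself is not reproduced here; the ESIHE baseline it extends is compact enough to sketch, with the exposure-based threshold and mean-value clipping as described in the ESIHE paper (a reference sketch, not the proposed EWDHE):

```python
import numpy as np

def esihe(gray):
    """ESIHE-style equalization: an exposure measure splits the histogram
    horizontally, the mean bin count clips it vertically, and each
    sub-histogram is equalized within its own intensity range."""
    hist = np.bincount(gray.flatten(), minlength=256).astype(np.float64)
    levels = np.arange(256)
    exposure = (hist * levels).sum() / (255.0 * hist.sum())   # in (0, 1)
    split = int(255 * (1 - exposure))            # exposure-based threshold
    clipped = np.minimum(hist, hist.mean())      # contrast limiting
    mapping = np.empty(256, dtype=np.uint8)
    for lo, hi in ((0, split), (split + 1, 255)):
        part = clipped[lo:hi + 1]
        cdf = np.cumsum(part) / max(part.sum(), 1e-9)
        mapping[lo:hi + 1] = lo + np.round((hi - lo) * cdf)
    return mapping[gray]
```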
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Wang, Tsan-Wei, e 王讚緯. "A Voice Conversion System Using Histogram Equalization and Target Frame Selection". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/n6u2b9.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 102 (2013/14)
In this thesis, linear multivariate regression (LMR) is adopted for spectrum mapping. In addition, histogram equalization (HEQ) of spectral coefficients and target frame selection (TFS) are incorporated into our system. We aim to solve the spectral over-smoothing problem encountered by the conventional GMM (Gaussian mixture model) based mapping mechanism in order to improve the converted voice quality. Since parallel training sentences are hard to prepare, we also study a method to construct an imitative parallel corpus from a nonparallel corpus. We then use a nonparallel corpus to build four voice conversion systems: LMR, LMR+TFS, HEQ+LMR, and HEQ+TFS. In the training stage, a refined segment-based frame alignment method constructs the imitative parallel corpus, which is used to train the model parameters of the four systems. In the HEQ module, discrete cepstral coefficients (DCC) are first transformed to principal-component-analysis (PCA) coefficients and then to cumulative-density-function (CDF) coefficients. In the TFS module, a DCC vector obtained from LMR mapping and its segment-class number are used to search the corresponding set of target-speaker frames belonging to the same segment class; the DCC vector of the frame nearest to the LMR-mapped vector is then found and used to replace it. In the conversion stage, the HEQ module decreases the average DCC error, whereas the TFS module increases it. However, the TFS module genuinely improves the converted voice quality according to the variance-ratio measure, so the increased average DCC error does not indicate that the converted voice quality is worse.
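The thesis applies HEQ in a PCA-transformed space; the core of feature-domain histogram equalization is per-dimension quantile matching between source and target training distributions, roughly as follows (a generic sketch; the variable names are illustrative):

```python
import numpy as np

def histogram_equalize_features(src_feats, src_ref, tgt_ref):
    """Per-dimension quantile matching: push each source coefficient
    through the source training CDF, then through the inverse target CDF.
    src_ref / tgt_ref are training matrices of shape (frames, dims)."""
    out = np.empty_like(src_feats, dtype=np.float64)
    for d in range(src_feats.shape[1]):
        s_sorted = np.sort(src_ref[:, d])
        t_sorted = np.sort(tgt_ref[:, d])
        # CDF value of each source coefficient, then target quantile lookup.
        u = np.interp(src_feats[:, d], s_sorted,
                      np.linspace(0.0, 1.0, len(s_sorted)))
        out[:, d] = np.interp(u, np.linspace(0.0, 1.0, len(t_sorted)), t_sorted)
    return out
```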
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Wu, Szu-Wei, e 吳思蔚. "Oriented Local Histogram Equalization Features and Its Application to Face Recognition". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/30115814217570120687.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 98 (2009/10)
In this thesis, we propose a novel image preprocessing method, oriented local histogram equalization (OLHE), which enhances the local oriented contrast of facial images, and we apply it to face recognition. The method preserves local orientation information by performing local histogram equalization (LHE) with asymmetric kernels. To extract features from a facial image, we concatenate the results of OLHE at eight different orientations, called the 8-oriented OLHE feature. We expect face recognition results to improve because the feature captures both local information and orientation, and the experimental results confirm this inference. The key advantages of the method are its low computational complexity, its invariance to illumination changes, and the ease with which it can be integrated with other face recognition algorithms. We demonstrate integrations of OLHE with Sparse Representation-based Classification (SRC), a holistic face recognition algorithm, and with the Facial Trait Code (FTC), a part-based face recognition algorithm. Furthermore, we propose the Sparse Representation Facial Trait Code (SRFTC), an integration of FTC and SRC that combines the advantages of the two algorithms and effectively decreases the influence of their shortcomings. In experiments on the AR database, we obtain a 99.3% recognition rate with the holistic algorithm and a 99.8% recognition rate with the part-based algorithm.
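OLHE's asymmetric, oriented kernels are the thesis's contribution and are not reproduced here; plain LHE, which it generalizes, remaps each pixel by its rank within a symmetric surrounding window (an unoptimized reference sketch):

```python
import numpy as np

def local_histogram_equalize(gray, half=7):
    """Plain LHE: each pixel becomes the rank (local CDF value) of its
    intensity within a (2*half+1)^2 window. OLHE would replace the
    symmetric window with asymmetric, oriented ones."""
    h, w = gray.shape
    pad = np.pad(gray, half, mode='reflect')
    out = np.empty_like(gray)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            out[i, j] = np.uint8(255 * (win < gray[i, j]).mean())
    return out
```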
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Wang, Chao-Hsin, e 王肇薪. "Novel Mean-Shift based Histogram Equalization by using Dynamic Range Suppression". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/74827709482354308199.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
Academic year 98 (2009/10)
This thesis presents a novel mean-shift based histogram equalization method, called the MSHE method. The key idea is to cluster the pixels in the non-smooth areas of the image using the mean-shift algorithm, suppress the dynamic range of the histogram composed of the clustered pixels, and then perform histogram equalization. Further, a contrast enhancement assessment is presented to compare the contrast produced by our method and six other methods: HE, BBHE, DSIHE, RSWHE, SRHE, and GA. On three typical test images, experimental results indicate that the proposed MSHE method outperforms the six existing contrast enhancement methods.
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Lin, Ping-Hsien, e 林秉賢. "Contrast Enhancement for Digital Color Images Using Variants of Histogram Equalization". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/44473391439777502972.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year 97 (2008/09)
With the prevalence of digital photography nowadays, more and more consumer electronic devices come with photo-shooting functionality. Most such equipment, however, is not intended for professional photography, and for economic reasons its components are not refined enough, producing pictures that are barely acceptable under extreme shooting conditions (for example, low-contrast images) and that must rely on post-processing techniques to improve their quality. In this thesis, we propose two primary methods, Iterative Sub-Histogram Equalization (ISHE) and Statistic-Separate Tri-Histogram Equalization (SSTHE), for contrast enhancement of color images with brightness preservation, along with a secondary post-enhancement technique, the Gaussian Distributive Filter (GDF), which directly improves contrast at a micro level and reduces brightness quantization in the output histogram of the former methods. ISHE generates a high-contrast image and preserves brightness to some degree by iteratively applying the BBHE method. SSTHE segments the original histogram into three regions according to the mean and standard deviation of the image brightness, re-ranges the span of each sub-histogram, and performs histogram equalization within each range. GDF locates over-concentrated values in the histogram and disperses them with a Gaussian distribution pattern. Since histogram calculation is already maturely implemented in hardware, the simplicity and low computational requirements of the proposed methods make them readily applicable to still color images in consumer electronics.
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Huang, Kai-hsiang, e 黃愷翔. "FPGA Implementation Of Histogram Equalization Based Real Time Video Image Processing". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/80500074206894109154.

Texto completo da fonte
Resumo:
Master's thesis
I-Shou University
Master's Program, Department of Electronic Engineering
Academic year 97 (2008/09)
In this dissertation, a video image processing system was implemented on an Altera DE2-70 FPGA development board, with video images captured by a 5-megapixel CMOS digital camera. The first stage of the vision system is image acquisition: the image is acquired and stored in SDRAM. Once the image has been obtained, various processing methods can be applied to perform vision tasks such as histogram equalization for image enhancement, and the processed image data are output to the LCD touch screen through its controller. The video system contains five modules: an image acquisition module (CMOS Sensor Data Capture), an image data format conversion and sampling module (Bayer Color Pattern Data To 30-Bit RGB), an SDRAM controller module (Multi-Port SDRAM Controller), an image processing module (Image process), and an LCD touch screen controller (LTM Controller And Data Request). The image acquisition module converts a two-dimensional image into a one-dimensional electrical signal that the system can handle. The format conversion and sampling module transforms the image data into 10-bit-per-channel RGB. The SDRAM controller module controls access of the image data to the SDRAM. The image processing module implements algorithms such as histogram equalization and an averaging (smoothing) filter. The LCD touch screen controller outputs the image data to the LCD touch screen. The proposed algorithms are applied to both grayscale and color images, simulated with the Altera Quartus II software tool, and the verified code is downloaded to the FPGA for hardware verification. The results indicate that the proposed video system achieves better image quality.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Cheng, Yao-Ren, e 鄭堯仁. "A Study on Optimized Histogram Equalization Methods for Hand Radiograph Segmentation Scheme". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/n67848.

Texto completo da fonte
Resumo:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 101 (2012/13)
Bone age assessment (BAA) analyzes a left-hand radiograph to evaluate the growth status of the hand bones and estimate the biometric age; this information can be used to determine whether the biometric age and the actual age are inconsistent. BAA can be used not only to detect growth retardation but also to understand the growth potential of children. To analyze the growth status of the metacarpal bones, one must first segment the bone area from the rest of the image and then extract features. Because this estimation targets children, problems in the metacarpal radiograph, such as an over-tilted palm position or low illumination contrast, can make EMROI segmentation more difficult. Tilting can be avoided through standardized imaging procedures, thereby improving BAA accuracy, but the low-illumination problem further decreases the contrast between bone and muscle tissue, reducing the accuracy of EMROI segmentation and hindering extraction of the epiphysis and evaluation of bone age. Therefore, before segmenting the bone area, an image enhancement approach suited to the characteristics of radiographs is usually applied to improve image quality. Although conventional enhancement approaches such as histogram equalization are excellent at enhancing the contrast of assorted images, on radiographs they may cause other problems, such as overexposure and loss of detail. In related research, optimized histogram equalization is the most commonly used approach for improving illumination contrast. Our method first uses morphology to preprocess the EMROI and then determines, by experiment, threshold values and other parameters suitable for the distal, middle, and proximal phalanges; optimized histogram equalization based on these values is then applied to enhance the region of interest. In the experiments, segmentation results with and without adaptive optimized histogram equalization were compared using indexes including accuracy, extraction error rate (ERR), and sensitivity. The results show that the presented enhancement approach yields better segmentation on all measures than segmentation without enhancement; the adaptive optimized histogram equalization therefore improves segmentation accuracy.
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Lai, I. Ju, e 賴薏如. "An Image Enhancement Algorithm Using Histogram Equalization and Content-Aware Image Segmentation". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95158538842882740904.

Texto completo da fonte
Resumo:
Master's thesis
Chang Gung University
Department of Computer Science and Information Engineering
Academic year 98 (2009/10)
Illumination is fundamental to visibility and thus an important factor in the human perception of images. Adjusting illumination to enhance an image is of great practical value, because image quality is easily degraded by non-ideal light sources. Histogram equalization is frequently used to adjust illumination and has been verified to improve the overall quality of the target image. However, this approach may also squash the available luminance range in a particular local area and thus dull the image content there. Researchers have introduced matrix-partition based local histogram equalization, which equalizes sub-images block by block. Although this enhances the quality of individual sub-images, the edges between blocks introduce a "chessboard effect" that degrades the appearance of the whole image. In this thesis, we propose an image segmentation method based on energy analysis. Using an edge detection operator, we compute the gradient and derive the energy of each pixel; analyzing the energy distribution of the image, we draw segmentation lines by connecting pixels with maximal energy values. Consequently, the segmentation is not a rigid grid but consists of flexible partitions, on which we perform local histogram equalization. Because such segmentation follows the image content closely, the edges between partitions do not reveal themselves after the illumination is adjusted. The proposed method both maintains a natural appearance and improves overall image quality.
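The partitioning itself aside, the energy map and a maximal-energy path can be sketched with a Sobel gradient and simple dynamic programming (a generic sketch of the idea, not the thesis's exact procedure):

```python
import numpy as np
import cv2

def energy_map(gray):
    """Per-pixel energy as the L1 gradient magnitude from Sobel filters."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return np.abs(gx) + np.abs(gy)

def max_energy_seam(energy):
    """Vertical path of maximal cumulative energy (one candidate
    content-aware segmentation line), found by dynamic programming."""
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1)
        right = np.roll(cost[i - 1], -1)
        left[0], right[-1] = -np.inf, -np.inf
        cost[i] += np.maximum(np.maximum(left, cost[i - 1]), right)
    seam = [int(np.argmax(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo = max(0, j - 1)
        seam.append(lo + int(np.argmax(cost[i, lo:j + 2])))
    return seam[::-1]
```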
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Tsai, Shang-nien, e 蔡尚年. "Robust Speech Feature Front-End Processing Techniques Based on Progressive Histogram Equalization". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/70500526488219371940.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Chen, Ying-Kang, e 陳映綱. "Color Image Enhancement Using Luminance Histogram Equalization and Two-Factor Saturation Control". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/59384384736025534597.

Texto completo da fonte
Resumo:
Master's thesis
Chung Yuan Christian University
Master's Program in Communications Engineering
Academic year 103 (2014/15)
Contrast is the key to visual effect. Generally speaking, greater contrast gives conspicuous images with plentiful color, while lower contrast results in grayish images. Brightness and contrast are highly interdependent: an image with a uniform brightness distribution shows a great deal of gray-level detail and a high dynamic range. Many image editing packages on the market and the Internet provide automatic image enhancement functions, with which a single simple operation can improve the visual quality of an image. However, in real tests we found that, under certain conditions or special circumstances, some of these enhancement functions do not work effectively and are likely to cause color problems; in other words, even after an image is enhanced in brightness, hue shifting and poor saturation often follow. The idea of the proposed method is therefore to keep the change in hue to a minimum and avoid altering the color attributes. We use histogram equalization to increase the dynamic range of the luminance and offer more options for saturation improvement based on the luminance change, using our proposed technique called two-factor reconstruction. Because the psychovisual sense of color cannot be quantified, the proposed method provides an adjustable parameter so users can meet their own color preferences. For convenience, input images are classified into several categories, and a parameter-setting guideline for each category is provided so that users can adjust the parameter to achieve the desired saturation in the output. Experimental results show that the proposed method successfully improves brightness and contrast while preserving hue information, and simultaneously improves the visual experience by enhancing saturation. It performs well relative to other methods, and the statistics of a psychological assessment also show that adjusting the image saturation with the parameter can fully meet users' needs.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Lin, Cheng-Feng, e 林成峯. "Landslide Detection with Multi-Dimensional Histogram Equalization for Multispectral Remotely Sensed Imagery". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/49425200590105182433.

Texto completo da fonte
Resumo:
Master's thesis
National Central University
Graduate Institute of Space Science
Academic year 101 (2012/13)
Taiwan is located on the Circum-Pacific seismic belt, so earthquakes are frequent in the region; in addition, several typhoons usually pass through this subtropical area each year. These two natural phenomena can cause serious landslides in mountainous regions. For landslide hazard assessment, change detection with remote sensing images is an efficient and effective approach. Change detection is one of the most important applications of remote sensing, providing useful information for disaster monitoring, urban development, and agricultural management. By comparing two images of the same location collected at different times, changes on the ground surface can be detected. However, differences in spectrum may not result solely from changes on the ground: the spectrum of the same material in two remote sensing images may differ because of differing solar illumination and atmospheric conditions at acquisition time. Therefore, radiometric calibration is required before applying a change detection algorithm and comparing spectra. In this study, we propose a multi-dimensional histogram equalization algorithm as a preprocessing step for relative calibration; it modifies multispectral images collected under different atmospheric conditions so that the same land cover has a similar spectrum. A set of SPOT images is adopted for the experiments, and the results show that the proposed method reduces the misclassification rate.
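The thesis equalizes the bands jointly; the one-dimensional, per-band building block, matching a source image's CDF to a reference image's, is standard and can be sketched as follows (a per-band reference sketch, not the multi-dimensional method itself):

```python
import numpy as np

def match_histogram(source, reference):
    """Relative radiometric normalization of one band: remap source
    intensities so their CDF follows the reference image's CDF."""
    s_vals, s_counts = np.unique(source, return_counts=True)
    r_vals, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source CDF value, look up the corresponding reference quantile.
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(source, s_vals, matched_vals)
```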
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Liu, Wen-bin, e 劉文彬. "A Fast Approach for Enhancing Sequence of Color Images Using Dichotomy Histogram Equalization". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/02770440886371219731.

Texto completo da fonte
Resumo:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
Academic year 90 (2001/02)
This study presents a novel approach for enhancing sequences of color images using dichotomy histogram equalization. Each pixel is first transformed from the RGB color space to the YCbCr color space; a mapping table is then created using histogram projection together with dichotomy histogram equalization. After the Y component of the input image is replaced, the Cb and Cr components are adjusted according to the color area in the YCbCr color space, and the result is transformed back to the RGB color space. Experimental results demonstrate the practicability of the proposed method.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Hilger, Florian Erich [Verfasser]. "Quantile based histogram equalization for noise robust speech recognition / von Florian Erich Hilger". 2004. http://d-nb.info/974461431/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Li, Wei-Jia, e 李尉嘉. "Enhancing Low-exposure Images Based on Modified Histogram Equalization and Local Contrast Adaptive Enhancement". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/03340322721691697800.

Texto completo da fonte
Resumo:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 104 (2015/16)
Image enhancement methods can effectively improve the visual content of images, provide a better visual experience, and help computers work more efficiently on images; enhanced images therefore tend to be more suitable than the originals for a given application. Two drawbacks are common in traditional image enhancement methods: over-enhancement and loss of detail. In this thesis, we propose an adaptive method to enhance the illumination of color images in two steps. The first step adjusts the content of the image based on its histogram, reducing unnatural points and avoiding over-brightness. The second step applies an adaptive local contrast enhancement algorithm to reduce the loss of detail. Experimental results show that our method effectively improves the brightness and contrast of low-exposure images; compared with other methods, it performs better on objective measurements such as contrast, entropy, gradient, and Absolute Mean Brightness Error (AMBE).
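Two of the objective measurements cited are simple enough to state in code; their standard definitions are as follows (a sketch of the common definitions, assuming 8-bit grayscale inputs):

```python
import numpy as np

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: how well mean brightness is kept."""
    return abs(float(original.mean()) - float(enhanced.mean()))

def entropy(image):
    """Shannon entropy of the gray-level distribution (detail richness)."""
    p = np.bincount(image.flatten(), minlength=256) / image.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```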
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

CHEN, ZHI-FAN, e 陳志凡. "An Image Enhancement Method Based on Bilateral Filtering and Contrast Limited Adaptive Histogram Equalization". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/77m8t9.

Texto completo da fonte
Resumo:
Master's thesis
National Chung Cheng University
Department of Information Management
Academic year 104 (2015/16)
At present, digital photographs cannot precisely reproduce the scene seen by the human eye, since display devices are typically low dynamic range rather than high dynamic range; in other words, for high-contrast images the devices often cannot display the details of shadows and highlights at the same time. If an ordinary image enhancement method is applied to such images, the result may be unevenly distributed brightness, color distortion, or loss of detail. This study therefore proposes a method to resolve these problems. It starts by using the bilateral filter to retain image details, then automatically supplies suitable operating parameters to contrast limited adaptive histogram equalization to make an appropriate contrast adjustment to the base-layer image, so that the display can approach the visual quality of high dynamic range. In the experiments, comparisons with other state-of-the-art methods show that the proposed method is superior in detail information, hue retention, and brightness enhancement, and it also performs better on objective mathematical evaluation indexes.
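The thesis's automatic parameter selection is its contribution and is not shown here; the base/detail decomposition it builds on can be sketched with OpenCV (the filter and CLAHE parameters below are illustrative, not the automatically chosen ones):

```python
import numpy as np
import cv2

def bilateral_clahe(gray, d=9, sigma_color=75, sigma_space=75, clip=2.0):
    """Split the image into a bilateral-filtered base layer and a signed
    detail layer, apply CLAHE to the base only, then add the details back."""
    base = cv2.bilateralFilter(gray, d, sigma_color, sigma_space)
    detail = gray.astype(np.int16) - base.astype(np.int16)  # signed residual
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    enhanced = clahe.apply(base).astype(np.int16) + detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```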
Estilos ABNT, Harvard, Vancouver, APA, etc.