
Dissertations / Theses on the topic 'Thresholding'



Consult the top 50 dissertations / theses for your research on the topic 'Thresholding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pavlicova, Martina. "Thresholding FMRI images." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1097769474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pavlicová, Martina. "Thresholding FMRI images." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1097769474.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xvii, 109 p.; also includes graphics (some col.). Includes bibliographical references (p. 107-109). Available online via OhioLINK's ETD Center.
3

Prakash, Aravind. "Confidential Data Dispersion using Thresholding." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_theses/232.

Abstract:
With the growing trend toward "cloud computing" and the increase in data moving onto the Internet, the need for service providers such as Google, Yahoo and Microsoft to store large amounts of data has grown over time. Now, more than ever, there is a need to efficiently and securely store large amounts of data. This thesis presents an implementation of a Ramp Scheme that confidentially splits a data file into a configurable number of parts, or shares, of equal size, such that a subset of those shares can recover the data entirely. Furthermore, the implementation supports a threshold for data compromise, and data verification to verify that the data parts have not been tampered with. This thesis addresses two key problems faced in large-scale data storage, namely data availability and confidentiality.
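The threshold-splitting idea above can be illustrated with the closely related Shamir scheme (a ramp scheme generalizes it, trading some confidentiality for smaller shares). This is a minimal sketch, not the thesis's implementation; the prime, function names, and parameters are illustrative assumptions:

```python
# Minimal (k, n) threshold splitting: any k of the n shares reconstruct the
# secret by Lagrange interpolation over a prime field. Illustrative sketch,
# not the thesis implementation.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for small integer secrets

def split(secret, k, n):
    """Return n shares (x, y); any k of them reconstruct `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With `split(s, 3, 5)`, any three of the five shares recover `s`, while fewer than three reveal nothing about it.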
4

Granlund, Oskar, and Kai Böhrnsen. "Improving character recognition by thresholding natural images." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208899.

Abstract:
Current state-of-the-art optical character recognition (OCR) algorithms are capable of extracting text from images under predefined conditions. OCR is extremely reliable for interpreting machine-written text with minimal distortions, but images taken in natural scenes are still challenging. In recent years, the topic of improving recognition rates in natural images has gained interest as more powerful handheld devices have come into use. The main problems faced in recognition in natural images are distortions such as illumination, font textures, and complex backgrounds. Different preprocessing approaches to separate text from its background have been researched lately. In our study, we assess the improvement achieved by two of these preprocessing methods, k-means and Otsu, by comparing their results from an OCR algorithm. The study showed that the preprocessing made some improvement on special occasions but, overall, yielded worse accuracy than the unaltered images.
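Of the two preprocessing methods compared above, Otsu's method has a compact closed form: choose the threshold that maximizes the between-class variance of the grayscale histogram. A minimal NumPy sketch (not the authors' code; variable names are illustrative):

```python
# Otsu's method: pick the gray level t that maximizes the between-class
# variance sigma_b^2(t) of the histogram. Illustrative NumPy sketch.
import numpy as np

def otsu_threshold(gray):
    """gray: 2-D array of uint8 intensities. Returns the Otsu threshold."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # intensity probabilities
    w0 = np.cumsum(p)                      # class-0 weight for each cut t
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    # between-class variance; empty classes produce nan/inf, zeroed below
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

Binarizing with `gray > otsu_threshold(gray)` then separates text from background when the histogram is reasonably bimodal.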
5

Kovac, Arne. "Wavelet thresholding for unequally time-spaced data." Thesis, University of Bristol, 1999. http://hdl.handle.net/1983/2088715a-7792-4032-bb76-83e3b0389b94.

6

Buck, Jonathan Gordon. "IMPROVED THRESHOLDING TECHNIQUE FOR THE MONOBIT RECEIVER." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1183477928.

7

Katakam, Nikhil. "Pavement crack detection system through localized thresholding /." Connect to full text in OhioLINK ETD Center, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=toledo1260820344.

Abstract:
Thesis (M.S.)--University of Toledo, 2009.
Typescript. "Submitted as partial fulfillment of the requirements for The Master of Science in Engineering." "A thesis entitled"--at head of title. Bibliography: leaves 65-68.
8

Hertz, Lois. "Robust image thresholding techniques for automated scene analysis." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15050.

9

Benson, Stephen R. "Adaptive Thresholding for Detection of Radar Receiver Signals." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1287692780.

10

Liu, Feiran. "High Frequency Resolution Adaptive Thresholding Wideband Receiver System." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1451042587.

11

Fernando, Gerard Marius Xavier. "Variable thresholding of images with application to ventricular angiograms." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37690.

12

Amecke, Nicole, André Heber, and Frank Cichos. "Distortion of power law blinking with binning and thresholding." AIP Publishing, 2014. https://ul.qucosa.de/id/qucosa%3A21262.

Abstract:
Fluorescence intermittency is a random switching between emitting (on) and non-emitting (off) periods found for many single chromophores such as semiconductor quantum dots and organic molecules. The statistics of the duration of on- and off-periods are commonly determined by thresholding the emission time trace of a single chromophore and appear to be power law distributed. Here we test with the help of simulations if the experimentally determined power law distributions can actually reflect the underlying statistics. We find that with the experimentally limited time resolution real power law statistics with exponents αon/off ≳ 1.6, especially if αon ≠ αoff would not be observed as such in the experimental data after binning and thresholding. Instead, a power law appearance could simply be obtained from the continuous distribution of intermediate intensity levels. This challenges much of the obtained data and the models describing the so-called power law blinking.
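The simulation described above can be caricatured in a few lines: draw on- and off-durations from a power law, then bin the emission trace, the step that, per the abstract, produces intermediate intensity levels. The exponent, minimum duration, and bin width below are arbitrary illustrative choices, not the paper's values:

```python
# Toy blinking simulation: power-law distributed on/off durations, then
# binning of the emission trace into per-bin "on" fractions in [0, 1].
# Illustrative sketch; parameters are not the paper's values.
import random

def power_law_duration(alpha, t_min=1.0):
    """Inverse-transform sample from p(t) ~ t^-alpha for t >= t_min."""
    u = random.random()
    return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def binned_trace(alpha_on, alpha_off, total_time, bin_width):
    """Simulate on/off switching; return per-bin on-fractions in [0, 1]."""
    bins = [0.0] * int(total_time / bin_width)
    t, on = 0.0, True
    while t < total_time:
        dur = power_law_duration(alpha_on if on else alpha_off)
        if on:  # spread the on-period over every bin it overlaps
            start, end = t, min(t + dur, total_time)
            b = int(start / bin_width)
            while b < len(bins) and b * bin_width < end:
                lo = max(start, b * bin_width)
                hi = min(end, (b + 1) * bin_width)
                bins[b] += (hi - lo) / bin_width
                b += 1
        t += dur
        on = not on
    return bins
```

Thresholding such a binned trace at, say, 0.5 and collecting run lengths would mimic the experimental analysis whose validity the paper questions.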
13

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." University of Sydney. School of Electrical and Information Engineering, 2005. http://hdl.handle.net/2123/699.

Abstract:
Thresholding is a commonly used technique in image segmentation because of its fast and easy application. For this reason, threshold selection is an important issue. There are two general approaches to threshold selection: one is based on the histogram of the image, while the other is based on the gray-scale information in small local areas. The histogram of an image contains statistical data about its grayscale or color ingredients. In this thesis, an adaptive logical thresholding method is first proposed for the binarization of blueprint images. The new method exploits the geometric features of blueprint images. This is implemented with a robust windows operation, which is based on the assumption that the objects have a "C" shape in a small area. We use multiple window sizes in the windows operation, which not only reduces computation time but also effectively separates thin lines from wide lines. Our method can automatically determine the threshold of images, and experiments show that it is effective for blueprint images and achieves good results over a wide range of images. Second, fuzzy set theory, along with probability partition and maximum entropy theory, is explored to compute the threshold based on the histogram of the image. Fuzzy set theory has been widely used in many fields where ambiguous phenomena exist since it was proposed by Zadeh in 1965, and many thresholding methods have been developed using this theory. The concept used here is called fuzzy partition: because our method is based on the histogram of the image, the histogram is parted into several groups by fuzzy sets that represent the fuzzy membership of each group. Probability partition is associated with fuzzy partition; the probability distribution of each group is derived from the fuzzy partition.
Entropy, which originates in thermodynamics, was introduced into communications theory as a commonly used criterion for measuring the information transmitted through a channel, and was adopted in image processing as a measurement of the information contained in processed images. Thus it is applied in our method as a criterion for selecting the optimal fuzzy sets that partition the histogram. To find the threshold, the histogram of the image is partitioned by fuzzy sets that satisfy a certain entropy restriction. The search for the best possible fuzzy sets then becomes an important issue, and there is no efficient method for the search procedure; as a result, expansion to multiple-level thresholding with fuzzy partition becomes extremely time-consuming or even impossible. In this thesis, the relationship between a probability partition (PP) and a fuzzy C-partition (FP) is studied. This relationship and the entropy approach are used to derive a thresholding technique that selects the optimal fuzzy C-partition. The measure of the selection quality is the entropy function defined by the PP and FP. A necessary condition for the entropy function to reach a maximum is derived, and based on this condition an efficient search procedure for two-level thresholding is obtained, which makes the search so efficient that extension to multilevel thresholding becomes possible. A novel fuzzy membership function is proposed for three-level thresholding, which produces a better result because a new relationship among the fuzzy membership functions is presented. This new relationship gives more flexibility in the search for the optimal fuzzy sets, although it also complicates the search for the fuzzy sets in multi-level thresholding. This complication is solved by a new method called the "Onion-Peeling" method, since the relationship between the fuzzy membership functions is so complicated that it is impossible to obtain all the membership functions at once.
The search procedure is decomposed into several layers of three-level partitions, except for the last layer, which may be two-level. The big problem is thus simplified to three-level partitions, so that we can obtain the two outermost membership functions without worrying too much about the complicated intersections among the membership functions. The method is further revised for images with a dominant area of background, or an object that affects the appearance of the histogram of the image. The histogram is the basis of our method, as of many others, and a "bad" histogram shape will result in a badly thresholded image. A quadtree scheme is adopted to decompose the image into homogeneous and heterogeneous areas, and a multi-resolution thresholding method based on the quadtree and fuzzy partition is then devised to deal with these images. Extension of fuzzy partition methods to color images is also examined: an adaptive thresholding method for color images based on fuzzy partition is proposed that can determine the number of thresholding levels automatically. This thesis concludes that the "C"-shape assumption and varying window sizes for the windows operation contribute to a better segmentation of blueprint images. The efficient search procedure for the optimal fuzzy sets in the fuzzy-2 partition of the image histogram accelerates the process so much that it enables extension to multilevel thresholding. In three-level fuzzy partition, the new relationship among the three fuzzy membership functions makes more sense than the conventional assumption and, as a result, performs better. A novel method, the "Onion-Peeling" method, is devised to deal with the complexity at the intersections among the multiple membership functions in the multilevel fuzzy partition.
It decomposes the multilevel partition into fuzzy-3 and fuzzy-2 partitions by transposing the partition space in the histogram, and is thus efficient in multilevel thresholding. A multi-resolution method applying the quadtree scheme to distinguish heterogeneous from homogeneous areas is designed for images with large homogeneous areas, which usually distort the histogram. A new histogram based only on the heterogeneous areas is adopted for partition and outperforms the old one, while validity checks filter out fragmented points, which are only a small portion of the whole image. The method thus gives good thresholded images for human face images.
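For orientation, the entropy criterion at the heart of this line of work is easiest to see in its plain, non-fuzzy form: Kapur's method picks the threshold that maximizes the summed entropies of the two classes. The sketch below is that classical baseline, not the fuzzy C-partition search developed in the thesis:

```python
# Kapur's maximum-entropy threshold selection: maximize H0(t) + H1(t), the
# entropies of the below- and above-threshold intensity distributions.
# Classical baseline sketch, not the thesis's fuzzy-partition method.
import numpy as np

def kapur_threshold(hist):
    """hist: length-256 grayscale histogram. Returns argmax of H0(t)+H1(t)."""
    p = hist.astype(float) / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = P[t], 1.0 - P[t]
        if w0 <= 0.0 or w1 <= 0.0:
            continue  # one class empty: entropy undefined
        p0 = p[: t + 1][p[: t + 1] > 0] / w0   # class-0 distribution
        p1 = p[t + 1 :][p[t + 1 :] > 0] / w1   # class-1 distribution
        h = -np.sum(p0 * np.log(p0)) - np.sum(p1 * np.log(p1))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

The fuzzy-partition approach replaces the hard cut at t with overlapping membership functions, but the entropy-maximization objective plays the same role.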
14

Kieri, Andreas. "Context Dependent Thresholding and Filter Selection for Optical Character Recognition." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-197460.

Abstract:
Thresholding algorithms and filters are of great importance when utilizing OCR to extract information from text documents such as invoices. Invoice documents vary greatly and since the performance of image processing methods when applied to those documents will vary accordingly, selecting appropriate methods is critical if a high recognition rate is to be obtained. This paper aims to determine if a document recognition system that automatically selects optimal processing methods, based on the characteristics of input images, will yield a higher recognition rate than what can be achieved by a manual choice. Such a recognition system, including a learning framework for selecting optimal thresholding algorithms and filters, was developed and evaluated. It was established that an automatic selection will ensure a high recognition rate when applied to a set of arbitrary invoice images by successfully adapting and avoiding the methods that yield poor recognition rates.
15

Hytla, Patrick C. "Multi-Ratio Fusion Change Detection Framework with Adaptive Statistical Thresholding." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1461322397.

16

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/699.

17

Camp, Charles Henry. "Patterned active region multimode switches for optical thresholding theory and simulation /." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2721.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
18

Carlsson, Pontus. "Transform Coefficient Thresholding and Lagrangian Optimization for H.264 Video Coding." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2264.

Abstract:

H.264, also known as MPEG-4 Part 10: Advanced Video Coding, is the latest MPEG standard for video coding. It provides approximately 50% bit rate savings for equivalent perceptual quality compared to any previous standard. In the same fashion as previous MPEG standards, only the bitstream syntax and the decoder are specified. Hence, coding performance is not only determined by the standard itself but also by the implementation of the encoder. In this report we propose two methods for improving the coding performance while remaining fully compliant with the standard.

After transformation and quantization, the transform coefficients are usually entropy coded and embedded in the bitstream. However, some of them might be beneficial to discard if the number of saved bits is sufficiently large. This is usually referred to as coefficient thresholding and is investigated in the scope of H.264 in this report.

Lagrangian optimization for video compression has proven to yield substantial improvements in perceived quality and the H.264 Reference Software has been designed around this concept. When performing Lagrangian optimization, lambda is a crucial parameter that determines the tradeoff between rate and distortion. We propose a new method to select lambda and the quantization parameter for non-reference frames in H.264.

The two methods are shown to achieve significant improvements. When combined, they reduce the bitrate by around 12%, while preserving the video quality in terms of average PSNR.

To aid development of H.264, a software tool has been created to visualize the coding process and present statistics. This tool is capable of displaying information such as bit distribution, motion vectors, predicted pictures and motion compensated block sizes.
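The coefficient-thresholding decision described above reduces, per coefficient, to comparing Lagrangian costs J = D + λR for keeping versus dropping it. A toy sketch with a made-up bit-cost model (the real decision in H.264 involves the actual entropy coder and dependencies between coefficients):

```python
# Per-coefficient Lagrangian thresholding: drop a quantized coefficient when
# the bits it costs outweigh the distortion its removal adds, i.e. minimize
# J = D + lam * R for each coefficient. Toy sketch with an assumed bit-cost
# model, not the H.264 entropy coder.
def threshold_coefficients(coeffs, bit_costs, lam):
    """coeffs: quantized values; bit_costs: bits to code each value.
    Dropping a coefficient saves its bits but adds c**2 to the distortion.
    Returns the coefficient list with unprofitable values zeroed out."""
    out = []
    for c, r in zip(coeffs, bit_costs):
        # J(keep) = 0 + lam*r ; J(drop) = c**2 + 0 ; keep iff c**2 > lam*r
        out.append(c if c * c > lam * r else 0)
    return out
```

Raising λ biases the decision toward rate savings, which is exactly the trade-off the report tunes when selecting λ for non-reference frames.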

19

Quan, Jin. "Image Denoising of Gaussian and Poisson Noise Based on Wavelet Thresholding." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1380556846.

20

Bunn, Wendy J. "Sensitivity to distributional assumptions in estimation of the ODP thresholding function /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1918.pdf.

21

Bunn, Wendy Jill. "Sensitivity to Distributional Assumptions in Estimation of the ODP Thresholding Function." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/953.

Abstract:
Recent technological advances in fields like medicine and genomics have produced high-dimensional data sets and a challenge to correctly interpret experimental results. The Optimal Discovery Procedure (ODP) (Storey 2005) builds on the framework of Neyman-Pearson hypothesis testing to optimally test thousands of hypotheses simultaneously. The method relies on the assumption of normally distributed data; however, many applications of this method will violate this assumption. This thesis investigates the sensitivity of this method to detection of significant but nonnormal data. Overall, estimation of the ODP with the method described in this thesis is satisfactory, except when the nonnormal alternative distribution has high variance and expectation only one standard deviation away from the null distribution.
22

Široký, Vít. "Implementace algoritmů zpracování obrazového rastru v FPGA." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237206.

Abstract:
This thesis presents an unusual view of the implementation of graphics algorithms on FPGAs, in the context of computer vision. It gives background on raster images and raster image operations, on raster image segmentation using thresholding and adaptive thresholding, and on the FPGA and DSP platforms. It then presents a design for a concrete realization of the project in the Unicam2D camera and describes other possible implementations. Finally, it describes the implemented tests with some demonstrations, followed by a discussion of the results at the end of the work.
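Adaptive thresholding, one of the segmentation operations the thesis targets for FPGA implementation, compares each pixel against a statistic of its local window. A software sketch using a local mean and an integral image (illustrative only; not the thesis's FPGA design, and the window size and offset are arbitrary):

```python
# Adaptive (local-mean) thresholding: a pixel is foreground if it exceeds the
# mean of its window minus a constant c. The integral image makes each window
# sum an O(1) lookup. Illustrative sketch, not the FPGA implementation.
import numpy as np

def adaptive_threshold(gray, window=15, c=5):
    """Binarize `gray` (2-D uint8) against a sliding local mean; window odd."""
    pad = window // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/col so sums index cleanly
    h, w = gray.shape
    win_sum = (ii[window:window + h, window:window + w]
               - ii[:h, window:window + w]
               - ii[window:window + h, :w]
               + ii[:h, :w])
    local_mean = win_sum / (window * window)
    return (gray > local_mean - c).astype(np.uint8)
```

The same sliding-window structure maps naturally onto FPGA line buffers, which is one reason this family of algorithms is a common hardware target.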
23

Kaur, Ravneet. "THRESHOLDING METHODS FOR LESION SEGMENTATION OF BASAL CELL CARCINOMA IN DERMOSCOPY IMAGES." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1367.

Abstract:
Purpose: Automatic border detection is the first and most crucial step for lesion segmentation and can be very challenging, due to several lesion characteristics. There are many melanoma border-detecting algorithms that perform poorly on dermoscopy images of basal cell carcinoma (BCC), which is the most common skin cancer. One of the reasons for poor lesion detection performance is that there are very few algorithms that detect BCC borders, because they are difficult to segment, even for dermatologists. This difficulty is due to low contrast, variation in lesion color and artifacts inside/outside the lesion. Segmentation that has adequate lesion-feature capture, with acceptable tolerance, will facilitate accurate feature segmentation, thereby maximizing classification accuracy. Methods: The main objective of this research was to develop an effective BCC border detecting algorithm whose accuracy is better than the existing melanoma border detectors that have been applied to BCCs. Fifteen auto-thresholding techniques were implemented for BCC lesion segmentation, but only five were selected for use in algorithm development. A novel technique was developed to automatically expand BCC lesion borders, to completely circumscribe the lesion. Two error metrics were used that better measure Type II (false-negative) errors: Relative XOR error and Lesion Capture Ratio (a novel error metric). Results: On training and test sets of 1023 and 119 images, respectively, based on the two error metrics, five thresholding-based algorithms outperformed two state-of-the-art melanoma segmentation techniques in segmenting BCCs.
Five algorithms generated borders that appreciably better matched dermatologists’ hand-drawn borders which were used as the “gold standard.” Conclusion: The five developed algorithms, which included solutions for image-vignetting correction and border expansion, to achieve dermatologist-like borders, provided more inclusive and therefore, feature-preserving border detection, favoring better BCC classification accuracy, for future work.
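Of the two error metrics above, the relative XOR error is simple to state as set operations on binary masks (a minimal sketch; the Lesion Capture Ratio is the thesis's own metric and is not reproduced here):

```python
# Relative XOR error between an automatic segmentation mask and the
# dermatologist's "gold standard" mask: the area of disagreement normalized
# by the gold-standard lesion area. Minimal illustrative sketch.
import numpy as np

def relative_xor_error(auto_mask, gold_mask):
    """Both inputs are 2-D binary masks of the same shape."""
    auto = auto_mask.astype(bool)
    gold = gold_mask.astype(bool)
    return np.logical_xor(auto, gold).sum() / gold.sum()
```

Because the XOR counts both missed lesion pixels and spurious background pixels, it penalizes the Type II (false-negative) errors that plain overlap ratios can hide.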
24

Pakalapati, Himani Raj. "Programming of Microcontroller and/or FPGA for Wafer-Level Applications - Display Control, Simple Stereo Processing, Simple Image Recognition." Thesis, Linköpings universitet, Elektroniksystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-89795.

Abstract:
This work presents the use of a wafer-level camera (WLC) for ensuring road safety. A prototype WLC, together with the Aptina MT9M114 stereo board, has been used for this project. The basic idea is to observe the movements of the driver; by doing so, an understanding of whether the driver is concentrating on the road can be achieved. The scene is captured with a wafer-level camera pair, and stereo processing is performed on the image pairs to obtain the real depth of the objects in the scene. Image recognition is used to separate the object from the background, which ultimately allows the system to concentrate on just the object of interest, in the present context the driver.
25

Downie, Timothy Ross. "Wavelet methods in statistics." Thesis, University of Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389339.

26

Manay, Siddharth. "Applications of anti-geometric diffusion of computer vision : thresholding, segmentation, and distance functions." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/33626.

27

Bonam, Om Pavithra. "Automated Quantification of Biological Microstructures Using Unbiased Stereology." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3013.

Full text
Abstract:
Research in many fields of the life and biomedical sciences depends on microscopic analysis of biological images. Quantitative analysis of these images is often time-consuming and tedious, and may be prone to subjective bias from the observer and to inter-/intra-observer variation. Systems for automatic analysis developed in the past decade determine various parameters associated with biological tissue, such as the number of cells, object volume, and fiber length, to avoid the problems of manual collection of microscopic data. Specifically, automatic analysis of biological microstructures using unbiased stereology, a set of approaches designed to avoid all known sources of systematic error, plays a large and growing role in bioscience research. Our aim is to develop an algorithm that automates and increases the throughput of a commercially available, computerized stereology device (Stereologer, Stereology Resource Center, Chester, MD). The current method for estimating first- and second-order parameters of biological microstructures requires a trained user to manually select biological objects of interest (cells, fibers, etc.) while systematically stepping through the three-dimensional volume of a stained tissue section. The present research proposes a three-part method to automate this process: detect the objects, connect the objects through a z-stack of images (images at varying focal planes) to form 3D objects, and finally count the 3D objects. The first step involves detection of objects through learned thresholding or automatic thresholding. Learned thresholding identifies the objects of interest by training on images to obtain the threshold range for objects of interest. Automatic thresholding is performed on gray-level images converted from RGB (red-green-blue) microscopic images to detect the objects of interest. Both learned and automatic thresholding are followed by iterative thresholding to separate objects that are close to each other.
The second step, linking objects through a z-stack of images, involves labeling the objects of interest using connected component analysis and then connecting these labeled objects across the stack of images to produce 3D objects. Finally, the number of linked objects in a 3D volume is counted using the counting rules of stereology. This automatic approach achieves an overall object detection rate of 74%. These results support the view that automatic image analysis combined with unbiased sampling, as well as assumption- and model-free geometric probes, provides accurate and efficient quantification of biological objects.
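The three-step pipeline described above (detect by thresholding, link across the z-stack, count) can be sketched with standard tools; here SciPy's `ndimage.label` (an assumption for illustration, not the Stereologer's implementation) does the linking and counting in one call:

```python
import numpy as np
from scipy import ndimage

def count_3d_objects(zstack, threshold):
    """Detect objects in each focal plane by thresholding, then link them
    across the z-stack with 3D connected-component labeling and count
    the resulting 3D objects."""
    binary = zstack > threshold
    # the default structuring element uses face connectivity, so foreground
    # voxels at the same (y, x) position in adjacent planes are linked
    labels, n_objects = ndimage.label(binary)
    return n_objects

# toy z-stack: one blob spanning planes 0-1, a second blob in plane 2
stack = np.zeros((3, 8, 8))
stack[0:2, 2:4, 2:4] = 1.0
stack[2, 5:7, 5:7] = 1.0
print(count_3d_objects(stack, 0.5))  # → 2
```

A full implementation would add the iterative thresholding step to split touching objects before labeling.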
APA, Harvard, Vancouver, ISO, and other styles
28

Nina, Oliver. "Text Segmentation of Historical Degraded Handwritten Documents." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2585.

Full text
Abstract:
The use of digital images of handwritten historical documents has increased in recent years. This has been possible through the Internet, which allows users to access vast collections of historical documents and makes historical and data research more attainable. However, the sheer number of images available in these digital libraries is too great for a single user to read and process. Computers could help read these images through methods known as Optical Character Recognition (OCR), which have had significant success for printed materials but only limited success for handwritten ones. Most of these OCR methods work well only when the images have been preprocessed by removing anything in the image that is not text. This preprocessing step is usually known as binarization. The binarization of images of historical documents that have been affected by degradation and that are of poor image quality is difficult and continues to be a focus of research in the field of image processing. We propose two novel approaches to this problem. One combines recursive Otsu thresholding and selective bilateral filtering to allow automatic binarization and segmentation of handwritten text images. The other adds background normalization and a post-processing step to the algorithm to make it more robust and to work even for images that present bleed-through artifacts. Our results show that these techniques segment the text in historical documents better than traditional binarization techniques.
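Otsu thresholding, the building block named above, picks the gray level maximizing between-class variance; the following is a sketch of plain Otsu plus one possible reading of the recursive step (re-applying Otsu to the darker class), an illustration of the idea rather than the thesis's exact algorithm:

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu: pick the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # first moment up to each level
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def recursive_otsu(img, depth=2):
    """Re-apply Otsu to the darker class to peel apart populations left
    merged by a single global cut (one reading of 'recursive Otsu')."""
    t = otsu_threshold(img)
    if depth <= 1:
        return [t]
    dark = img[img <= t]
    if dark.size == 0 or dark.min() == dark.max():
        return [t]
    return sorted(recursive_otsu(dark, depth - 1) + [t])
```

The real method pairs this with selective bilateral filtering and background normalization, which are omitted here.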
APA, Harvard, Vancouver, ISO, and other styles
29

Vantaram, Sreenath Rao. "Fast unsupervised multiresolution color image segmentation using adaptive gradient thresholding and progressive region growing /." Online version of thesis, 2009. http://hdl.handle.net/1850/9016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Yong. "VIrginia Urban Dynamics Study Using DMSP/OLS Nighttime Imagery." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104235.

Full text
Abstract:
Urban dynamics at regional scales has become increasingly important for economics, policy, and land use planning, and monitoring regional-scale urban dynamics has become an urgent need in recent years. This study illustrated the use of time-series nighttime light (NTL) data from the United States Air Force Defense Meteorological Satellites Program/Operational Linescan System (DMSP/OLS) to delineate urban boundaries and tracked three key urban changes within Virginia: land cover change, population growth, and GDP growth. NTL data from different years were inter-calibrated to be comparable using a linear regression model and the Pseudo-Invariant Features (PIFs) method. Urban patches were delineated by applying thresholding techniques based on digital number (DN) values extracted from DMSP/OLS imagery. Compounded Night Light Index (CNLI) values were calculated to help estimate GDP, and these processes were applied in a time series from 2000 to 2010. Spatial patterns of DN change and the variation of CNLI indicate that human activities increased during the 10 years in Virginia. Accuracy of the results was confirmed using ancillary data sources from the U.S. Census and NLCD imagery.
Master of Science
Urban areas concentrate the built environment, population, and economic activities; urban sprawl is therefore a simultaneous result of land-use change, economic growth, population growth, and so on. Remote sensing has long been used to map urban sprawl within individual cities, while less research has focused on regional-scale urban dynamics. However, regional-scale urban dynamics are increasingly important for economics, policy formulation, and land use planning, and monitoring them has become an urgent need in recent years. Here, we illustrated the use of multi-temporal United States Air Force satellite data to help monitor urban sprawl by delineating urban patches, and we measured a variety of urban changes within Virginia based on the delineation, such as urban population growth and land cover change. To do so, digital number values, which measure the brightness of satellite imagery, were extracted, other related index values were calculated from them, and these processes were applied in a time series from 2000 to 2010. Spatial patterns of change in digital number values and the variation of another light index indicate that human activities increased during the 10 years in Virginia.
APA, Harvard, Vancouver, ISO, and other styles
31

Sorwar, Golam 1969. "A novel distance-dependent thresholding strategy for block-based performance scalability and true object motion estimation." Monash University, Gippsland School of Computing and Information Technology, 2003. http://arrow.monash.edu.au/hdl/1959.1/5510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Almotiri, Jasem. "A Multi-Anatomical Retinal Structure Segmentation System for Automatic Eye Screening Using Morphological Adaptive Fuzzy Thresholding." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10975223.

Full text
Abstract:

An eye exam can be as efficacious as a physical one in determining health concerns. Retina screening can provide the very first clue to a variety of hidden health issues, including pre-diabetes and diabetes. Through the process of clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects diagnosis accuracy, which in turn affects the subsequent clinical treatment steps. This thesis proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, our proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images.

The proposed system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. The proposed system is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures.

APA, Harvard, Vancouver, ISO, and other styles
33

Sahtout, Mohammad Omar. "Improving the performance of the prediction analysis of microarrays algorithm via different thresholding methods and heteroscedastic modeling." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17914.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Haiyan Wang
This dissertation considers different methods to improve the performance of the Prediction Analysis of Microarrays (PAM). PAM is a popular algorithm for high-dimensional classification. However, it has the drawback of retaining too many features, even after multiple runs of the algorithm to perform further feature selection. The average number of selected features is 2611 in applications of PAM to 10 multi-class microarray human cancer datasets. Such a large number of features makes follow-up study difficult. This drawback results from the soft thresholding method used in the PAM algorithm and from PAM's thresholding parameter estimate. In this dissertation, we extend the PAM algorithm with two other thresholding methods (hard and order thresholding) and a deep search algorithm to achieve a better thresholding parameter estimate. In addition to the new proposed algorithms, we derive an approximation for the probability of misclassification for the hard-thresholded algorithm in the binary case. Beyond the aforementioned work, this dissertation considers the heteroscedastic case, in which the variances of each feature differ across classes. In the PAM algorithm, the variance of the values for each predictor is assumed to be constant across classes. We found that this homogeneity assumption is invalid for many features in most data sets, which motivated us to develop new heteroscedastic versions of the algorithms; the different thresholding methods were considered in these algorithms as well. All new algorithms proposed in this dissertation are extensively tested and compared on real data or in Monte Carlo simulation studies. The new proposed algorithms, in general, not only achieved better cancer-status prediction accuracy but also resulted in more parsimonious models with a significantly smaller number of genes.
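The two shrinkage rules being contrasted have simple closed forms; a minimal sketch of the classic soft and hard thresholding operators applied to PAM-style shrunken-centroid scores (illustrative only; order thresholding and the deep search are omitted):

```python
import numpy as np

def soft_threshold(d, t):
    """PAM-style soft thresholding: shrink every score toward zero by t,
    zeroing those whose magnitude falls below t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def hard_threshold(d, t):
    """Hard thresholding: keep scores whose magnitude exceeds t
    unchanged, zero the rest (no shrinkage of the survivors)."""
    return np.where(np.abs(d) > t, d, 0.0)

d = np.array([-3.0, -0.5, 0.2, 1.5])
shrunk = soft_threshold(d, 1.0)  # small scores vanish, large ones shrink by 1
kept = hard_threshold(d, 1.0)    # small scores vanish, large ones are unchanged
```

Hard thresholding retains the full magnitude of surviving features, which is one reason it can yield sparser models at comparable accuracy.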
APA, Harvard, Vancouver, ISO, and other styles
34

Herrmann, Felix J., and Gilles Hennenfent. "Non-linear data continuation with redundant frames." Canadian Society of Exploration Geophysicists, 2005. http://hdl.handle.net/2429/518.

Full text
Abstract:
We propose an efficient iterative data interpolation method using continuity along reflectors in seismic images via curvelet and discrete cosine transforms. The curvelet transform is a new multiscale transform that provides sparse representations for images that comprise smooth objects separated by piecewise smooth discontinuities (e.g. seismic images). The advantage of using curvelets is that these frames are sparse for high-frequency caustic-free solutions of the wave equation. Since we are dealing with less than ideal data (e.g. bandwidth-limited), we complement the curvelet frames with the discrete cosine transform. The latter is motivated by the successful data continuation with the discrete Fourier transform. By choosing generic basis functions we circumvent the necessity of making parametric assumptions (e.g. through linear/parabolic Radon or demigration) regarding the shape of events in seismic data. Synthetic and real data examples demonstrate that our algorithm provides interpolated traces that accurately reproduce the wavelet shape as well as the AVO behavior along events in shot gathers.
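The iterative loop (sparsify in the transform domain, soft-threshold, restore the known traces) can be sketched in one dimension; here a plain DCT stands in for the curvelet/DCT frame pair, so this is a simplified stand-in rather than the authors' algorithm:

```python
import numpy as np
from scipy.fft import dct, idct

def interpolate_traces(data, known, iters=200, tau0=0.5):
    """POCS-style interpolation: alternate soft thresholding in a sparse
    transform domain with reinsertion of the known samples, while the
    threshold decreases over the iterations."""
    x = np.where(known, data, 0.0)
    for i in range(iters):
        tau = tau0 * (1.0 - i / iters)
        c = dct(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)  # soft threshold
        x = idct(c, norm="ortho")
        x[known] = data[known]  # enforce data consistency
    return x
```

For a signal that is genuinely sparse in the transform domain, the missing samples are progressively filled in by the energy that survives thresholding.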
APA, Harvard, Vancouver, ISO, and other styles
35

Yanni, Mamdouh. "The influence of thresholding and spatial resolution variations on the performance of the complex moments descriptor feature extractor." Thesis, University of Kent, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Larkins, Robert L. "Off-line signature verification." The University of Waikato, 2009. http://hdl.handle.net/10289/2803.

Full text
Abstract:
In today’s society signatures are the most accepted form of identity verification. However, they have the unfortunate side-effect of being easily abused by those who would feign the identification or intent of an individual. This thesis implements and tests current approaches to off-line signature verification with the goal of determining the most beneficial techniques that are available. This investigation will also introduce novel techniques that are shown to significantly boost the achieved classification accuracy for both person-dependent (one-class training) and person-independent (two-class training) signature verification learning strategies. The findings presented in this thesis show that many common techniques do not always give any significant advantage and in some cases they actually detract from the classification accuracy. Using the techniques that are proven to be most beneficial, an effective approach to signature verification is constructed, which achieves approximately 90% and 91% on the standard CEDAR and GPDS signature datasets respectively. These results are significantly better than the majority of results that have been previously published. Additionally, this approach is shown to remain relatively stable when a minimal number of training signatures are used, representing feasibility for real-world situations.
APA, Harvard, Vancouver, ISO, and other styles
37

CHEN, JUN-SHENG, and 陳俊勝. "Thresholding for edge detection." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/92488543130129325075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

ZHANG, ZHAO-ZHI, and 張昭智. "Studies on image thresholding." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/66199721659502775939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ensafi, Pegah. "Weighted Opposition-Based Fuzzy Thresholding." Thesis, 2011. http://hdl.handle.net/10012/5796.

Full text
Abstract:
With the rapid growth of digital imaging, image processing techniques are widely involved in many industrial and medical applications. Image thresholding plays an essential role in image processing and computer vision applications and has a vast domain of usage. Areas such as document image analysis, scene or map processing, satellite imaging, and material inspection in quality-control tasks are examples of applications that employ image thresholding or segmentation to extract useful information from images. Medical image processing is another area that has extensively used image thresholding to help experts better interpret digital images for a more accurate diagnosis or to plan treatment procedures. Opposition-based computing, on the other hand, is a recently introduced model that can be employed to improve the performance of existing techniques. In this thesis, the idea of oppositional thresholding is explored to introduce new and better thresholding techniques. A recent method, called Opposite Fuzzy Thresholding (OFT), combines fuzzy sets with the opposition idea and, based on some preliminary experiments, seems reasonably successful in thresholding some medical images. In this thesis, a Weighted Opposite Fuzzy Thresholding (WOFT) method is presented that produces more accurate and reliable results than the parent algorithm. This claim has been verified in experimental trials using both synthetic and real-world images. Experimental evaluations were conducted on two sets of synthetic and medical images to validate the robustness of the proposed method in improving the accuracy of the thresholding process when fuzzy and oppositional ideas are combined.
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Shih-Huang, and 張世鍠. "Image Thresholding and Its Applications." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/68195654062316748008.

Full text
Abstract:
碩士
國立臺灣大學
電機工程學研究所
86
Thresholding is a simple, efficient method for image processing that often serves as a pre-processing step for many applications. Several problems arise in thresholding: Which criterion should be applied? How many thresholds are necessary? How can local information be preserved? These questions are taken into consideration in the thresholding procedure. In this thesis, we discuss 1-D thresholding methods, including Otsu's, Pun's, Kapur's, moment-preserving, minimum-error, and Reddi's, as well as 2-D thresholding methods.
APA, Harvard, Vancouver, ISO, and other styles
41

"Robust wavelet thresholding for noise suppression." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, 1996. http://hdl.handle.net/1721.1/3446.

Full text
Abstract:
I.C. Schick, H. Krim.
Cover title.
Includes bibliographical references (p. 4).
Supported in part by the Army Research Office, DAAL-03-92-G-115. Supported in part by the Air Force Office of Scientific Research, F49620-95-1-0083, BU GC12391NGD
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Kuei-Yu, and 李奎諭. "Thresholding technique with adaptive window selection." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/20315316833800442375.

Full text
Abstract:
碩士
玄奘大學
資訊管理學系碩士班
94
Thresholding is a useful technique for image segmentation. In addition, thresholding is used as a preprocessing step for pattern recognition and can also be applied in medical image processing. We propose a novel method that groups the gray levels of the image histogram and adopts Otsu's method to search for the optimal threshold in two stages. Experimental results show that the proposed method reduces computation time while producing images similar to those of Otsu's method. In addition, we propose a novel image thresholding technique with adaptive window selection for unevenly lit images. The advantage of this technique is its effectiveness in eliminating ghost objects and reducing misclassification error. To find the optimal solution rapidly, we adopt simulated annealing, adaptively selecting the image window size based on an extended pyramid data structure. Experimental results show the superior performance of this technique.
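The stochastic-search ingredient can be sketched in isolation: the following uses simulated annealing to maximize Otsu's between-class variance over candidate thresholds (the adaptive window selection and pyramid structure of the abstract are omitted, so this only illustrates the annealing idea):

```python
import random
import numpy as np

def anneal_threshold(hist, iters=2000, temp0=5.0, seed=0):
    """Search for a global threshold maximizing Otsu's between-class
    variance by simulated annealing instead of an exhaustive scan."""
    rng = random.Random(seed)
    p = hist / hist.sum()
    idx = np.arange(len(p))
    mu_total = float((p * idx).sum())

    def score(t):
        # between-class variance when the low class is levels 0..t
        w0 = float(p[: t + 1].sum())
        if w0 < 1e-12 or w0 > 1.0 - 1e-12:
            return 0.0
        mu0 = float((p[: t + 1] * idx[: t + 1]).sum())
        return (mu_total * w0 - mu0) ** 2 / (w0 * (1.0 - w0))

    t = rng.randrange(len(p))
    best_t, best_s = t, score(t)
    for i in range(iters):
        temp = temp0 * (1.0 - i / iters) + 1e-9   # cooling schedule
        cand = min(max(t + rng.randint(-8, 8), 0), len(p) - 1)
        delta = score(cand) - score(t)
        if delta > 0 or rng.random() < np.exp(delta / temp):
            t = cand                               # accept move
        if score(t) > best_s:
            best_t, best_s = t, score(t)
    return best_t
```

For a single global threshold an exhaustive scan is of course cheap; annealing pays off when the search space grows (e.g. threshold plus window parameters, as in the thesis).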
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Wan-Yue, and 陳宛渝. "Fast Adaptive PNN-Based Thresholding Algorithms." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/08370008745755482942.

Full text
Abstract:
碩士
國立臺灣科技大學
資訊管理系
89
Thresholding is a fundamental operation in image processing. Based on the pairwise nearest neighbor (PNN) technique and the variance criterion, this thesis presents two fast adaptive thresholding algorithms. On a set of different real images, experimental results reveal that the first proposed algorithm is considerably faster than three previous algorithms while having a good feature-preserving capability; those three previous algorithms need exponential time. Given a specific peak signal-to-noise ratio (PSNR), we further present a second thresholding algorithm that determines as few thresholds as possible while still obtaining a thresholded image satisfying the given PSNR. Experiments demonstrate that the resulting thresholded images are encouraging. Since the time complexities of our two proposed algorithms are polynomial, they can meet the real-time demands of image preprocessing.
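The PNN idea, repeatedly fusing the pair of adjacent gray-level clusters whose merge is cheapest under a variance criterion, can be sketched as follows; this illustrates the principle with a Ward-style merge cost and is not the thesis's exact algorithm:

```python
import numpy as np

def pnn_thresholds(hist, k):
    """Merge adjacent gray-level clusters pairwise-nearest-neighbor
    style, always fusing the pair whose merge least increases the
    within-class squared error, until k clusters remain. Returns the
    k-1 thresholds (upper gray level of each cluster but the last)."""
    # cluster record: [lo, hi, count, mean, sse]
    clusters = [[g, g, int(c), float(g), 0.0]
                for g, c in enumerate(hist) if c > 0]

    def merge_cost(a, b):
        # Ward-style increase in SSE when fusing clusters a and b
        n1, m1, n2, m2 = a[2], a[3], b[2], b[3]
        return n1 * n2 / (n1 + n2) * (m1 - m2) ** 2

    while len(clusters) > k:
        costs = [merge_cost(clusters[i], clusters[i + 1])
                 for i in range(len(clusters) - 1)]
        i = int(np.argmin(costs))
        a, b = clusters[i], clusters[i + 1]
        n = a[2] + b[2]
        m = (a[2] * a[3] + b[2] * b[3]) / n
        clusters[i:i + 2] = [[a[0], b[1], n, m,
                              a[4] + b[4] + merge_cost(a, b)]]
    return [c[1] for c in clusters[:-1]]
```

Because only adjacent clusters can merge, each step is cheap and the whole procedure runs in polynomial time, in line with the abstract's complexity claim.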
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Chung-Tai, and 陳崇泰. "Assessment of Digital Image Thresholding Algorithms." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/84497913294066083615.

Full text
Abstract:
碩士
崑山科技大學
電子工程研究所
98
Computer vision systems acquire digital images by means of camera lenses. The quality of these images is greatly improved through pre-processing, raising the likelihood of success in follow-up processing. Objects and backgrounds can be separated in order to capture the information needed during morphological analysis and object-characteristic identification. The Matlab platform offers strong matrix calculation and image processing capabilities; using this powerful tool to handle the complex mathematics, this thesis investigates different algorithms for digital image processing. Seven algorithms are used to search for the proper threshold value: (1) global image thresholding, (2) local adaptive thresholding, (3) automatic thresholding (isodata), (4) optimal thresholding, (5) fuzzy C-means clustering, (6) the iterative method, and (7) global and region features. Each algorithm is applied to simplify the images in this research into binary images, and its accuracy and efficiency are analyzed. Morphological processing is then applied to the binary images for further related applications.
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Ya-Ting, and 林雅婷. "Texture Image Segmentation Using Adaptive Thresholding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/mzbr2k.

Full text
Abstract:
碩士
臺中技術學院
資訊科技與應用研究所
97
Texture is one of the important issues in image processing, used in many applications such as image segmentation, image retrieval, and pattern recognition. Image segmentation is the pre-processing step in many applications, where segmentation results affect the subsequent processing. This thesis proposes an adaptive threshold for texture image segmentation. The adaptive threshold is determined from training images and is used to assess the distance between two similar texture regions. In an image, regions containing different texture features are recognized as different objects. The directional relationships between each pixel and its neighbors are described as patterns, and the probabilities of the pattern correlations are used as texture features. The conventional split-and-merge procedure uses fixed thresholds in the splitting and merging processes; however, an adequate threshold is difficult to determine. We propose to compute an adaptive threshold according to the texture features in the training images. Our method includes splitting, agglomerative merging, and boundary refinement; the adaptive threshold is used in the agglomerative merging process. Experimental results demonstrate that our method can segment texture objects successfully.
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Hsi-Ming, and 楊希明. "Multi-thresholding Character Extraction in a Map." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/05169391959659989786.

Full text
Abstract:
碩士
國立交通大學
資訊工程研究所
83
Maps provide many pieces of information, such as countries, cities, and rivers, which are useful to human beings in a geographic information system (GIS). How to automatically extract the information from a map to build a database for user retrieval is one of the goals of a GIS. Character extraction is one of the essential tasks for entering map information into a computer. Because the input to the system is a grayscale map, we propose a method to extract Chinese characters from the map via a multi-thresholding scheme. It consists of two phases. In the first phase, we extract the Chinese characters from a map represented by multi-level values. After performing binarization, the character extraction operations, comprising three processes named blurring, connected component extraction, and rotation angle detection, are conducted repeatedly based on different thresholds. Once characters are extracted from the map, they are sent to a statistics-based character recognition module and subtracted from the map. In the second phase, we extract simple components from the remaining map by the run-length method to remove long components, which may be road lines. Then the character extraction operations used in the first phase are performed again to extract characters; these extracted characters are also sent to the recognition module. Our testing sample maps contain 571 Chinese characters. Among them, 471 Chinese characters are correctly extracted. The extraction rate for our system is 82.31%.
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Jung Shiong, and 張俊雄. "Automatic Multi-level Thresholding on Thermal Images." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/52205597843210622394.

Full text
Abstract:
碩士
國立中正大學
電機工程研究所
83
A new wavelet-based automatic multi-level thresholding technique is proposed. The new technique is a generalized version of the method proposed by Olivo. In his paper, Olivo proposed using a set of dilated wavelets to convolve with the histogram of an image. For each scale, a set of thresholds was determined automatically based on the rules he proposed. However, Olivo did not provide a systematic way to decide the exact set of thresholds, corresponding to a specific scale, that makes the segmentation result best. In this thesis, we propose using a cost function as guidance to solve this problem. Experimental results show that our approach can always automatically select the best scale for performing multi-level thresholding.
APA, Harvard, Vancouver, ISO, and other styles
48

Fu, Li Yao, and 李曜輔. "Color Image Segmentation Using Circular Histogram Thresholding." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/42055582127526155434.

Full text
Abstract:
碩士
國立中央大學
資訊及電子工程研究所
82
A circular histogram thresholding method for color image segmentation is proposed. First, a hue circular histogram is constructed based on a UCS (I,H,S) color space. Next, the histogram is smoothed by a scale-space filter, and the circular histogram is transformed into a traditional histogram form. Finally, the histogram is recursively thresholded based on the maximum principle of analysis of variance. Three performance comparisons are reported: (1) the proposed thresholding on the circular histogram versus a traditional histogram; (2) the proposed thresholding versus clustering; (3) thresholding on hue attributes of UCS versus non-UCS color spaces. Several benefits of the proposed approach are identified in the experiments.
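The step that turns the circular hue histogram into a traditional one can be illustrated simply; one common choice (assumed here for illustration, the thesis may cut elsewhere) is to break the circle at its emptiest bin:

```python
import numpy as np

def linearize_hue_histogram(hist):
    """Rotate a circular hue histogram so the cut point falls at its
    emptiest bin; the rolled histogram can then be thresholded like an
    ordinary linear histogram. Returns (rolled_hist, cut_index)."""
    cut = int(np.argmin(hist))  # break the circle where mass is lowest
    return np.roll(hist, -cut), cut

hue_hist = np.array([5, 1, 9, 9, 9, 4])
rolled, cut = linearize_hue_histogram(hue_hist)
```

Cutting at a sparse bin minimizes the chance of splitting a hue cluster that straddles the wrap-around point.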
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Jian-Min, and 黃健旻. "Multilevel Optimal Thresholding Method for Image Segmentation." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/69114026014074061950.

Full text
Abstract:
碩士
國立中興大學
應用數學系所
101
In this paper, we propose a new evolutionary algorithm combined with a lookup table (LUT) method. The algorithm can be applied to the selection of optimal multilevel thresholds for image segmentation. The objective function adopted is the same as in Otsu's method (the between-class variance of the image histogram), and the optimal multilevel thresholds are determined by maximizing the between-class variance. To increase computing speed, our algorithm is implemented in MATLAB using matrix operations and is combined with the lookup table method to calculate the between-class variance quickly. For verification, numerical experiments compare it with three existing algorithms: an enumerative algorithm, the Liao algorithm, and a genetic algorithm. Experimental results show that as the number of thresholds increases, our algorithm gains an increasing advantage over the other methods.
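The lookup-table idea can be sketched for the two-threshold case: precomputed cumulative sums give each class's weight and mean in O(1), so the between-class variance is cheap to evaluate for every threshold pair (the thesis couples this with an evolutionary search; the sketch below searches exhaustively):

```python
import numpy as np

def two_threshold_otsu(hist):
    """Exhaustive two-threshold Otsu search. The cumulative sums P and S
    act as a lookup table: each class's weight and mean cost O(1), so
    the whole search is O(L^2) rather than O(L^3). Maximizing the sum
    of w_k * mu_k^2 is equivalent to maximizing between-class variance,
    since the total mean is constant."""
    p = hist / hist.sum()
    P = np.concatenate([[0.0], np.cumsum(p)])                     # P[t] = sum(p[0:t])
    S = np.concatenate([[0.0], np.cumsum(p * np.arange(len(p)))])
    L = len(p)
    best, best_t = -1.0, (0, 1)
    for t1 in range(1, L - 1):
        for t2 in range(t1 + 1, L):
            total = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, L)):
                w = P[hi] - P[lo]
                if w > 0.0:
                    mu = (S[hi] - S[lo]) / w
                    total += w * mu * mu
            if total > best:
                best, best_t = total, (t1 - 1, t2 - 1)
    return best_t  # last gray levels of the low and middle classes
```

An evolutionary search replaces the nested loops with sampled candidates while still evaluating each candidate in O(1) through the same lookup table.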
APA, Harvard, Vancouver, ISO, and other styles
50

Jiang, Yu-Yang, and 江宇洋. "Automatic Color Edge Detection by Entropic Thresholding." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/74075981026292618978.

Full text
Abstract:
碩士
國立交通大學
電機與控制工程系所
97
In this thesis, we propose automatic color edge detection techniques based on vector order statistics and principal component analysis with entropic thresholding. Both methods employ improved entropic thresholding to determine the edge threshold. Our color edge detection techniques can detect edges where neighboring objects have different hues but similar intensities, which cannot be detected by known grayscale or color edge detectors. Furthermore, by using entropic thresholding we can automatically determine an optimal threshold that adapts to different image contents without manual intervention, making edge detection with our proposed scheme user-friendly and reliable.
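The threshold-selection ingredient can be illustrated with the classical Kapur entropy criterion (the thesis uses an improved variant, so this is only the textbook form):

```python
import numpy as np

def kapur_threshold(hist):
    """Classic Kapur entropic thresholding: choose the threshold that
    maximizes the sum of the Shannon entropies of the two classes'
    normalized distributions."""
    p = hist / hist.sum()
    best_h, best_t = -np.inf, 0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0.0 or w1 <= 0.0:
            continue  # one class empty: entropy undefined, skip
        h = 0.0
        for q in (p[:t] / w0, p[t:] / w1):
            nz = q[q > 0]
            h -= float((nz * np.log(nz)).sum())
        if h > best_h:
            best_h, best_t = h, t - 1
    return best_t  # last level assigned to the low class
```

In the edge-detection setting the histogram would be built over edge-strength magnitudes rather than gray levels, and the chosen threshold separates edge from non-edge responses.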
APA, Harvard, Vancouver, ISO, and other styles